[
  {
    "path": ".github/ISSUE_TEMPLATE.md",
    "content": "Please set a title for the issue that starts with \"BUG\" or \"FEATURE REQUEST\" as appropriate. Use [the PinBox Discord server](https://discord.gg/9Qae7fT) for support questions.\n\n<!--- Provide a general summary of the issue in the Title above -->\n\n## Expected Behavior\n<!--- Tell us what should happen -->\n\n## Current Behavior\n<!--- Tell us what happens instead of the expected behavior -->\n\n## Possible Solution\n<!--- Not obligatory, but suggest a fix for or cause of the bug -->\n\n## Steps to Reproduce\n<!--- Provide a link to a live example, or an unambiguous set of steps to -->\n<!--- reproduce this bug. Include code to reproduce, if relevant -->\n1.\n2.\n3.\n4.\n\n## Context (Environment)\n<!--- How has this issue affected you? What are you trying to accomplish? -->\n<!--- Providing context helps us come up with a solution that is most useful in the real world -->\n\n## Detailed Description\n<!--- For feature requests, provide a detailed description of the change or addition you are proposing -->\n\n## Possible Implementation\n<!--- Not obligatory, but suggest an idea for implementing the addition or change -->\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "Please set a title for the pull request that starts with \"BUG\" or \"FEATURE REQUEST\" as appropriate.\n\n<!--- Provide a general summary of your changes in the Title above -->\n\n## Description\n<!--- Describe your changes in detail -->\n\n## Related Issue\n<!--- This project only accepts pull requests related to open issues -->\n<!--- If suggesting a new feature or change, please discuss it in an issue first -->\n<!--- If fixing a bug, there should be an issue describing it with steps to reproduce -->\n<!--- Please link to the issue here: -->\n\n## Motivation and Context\n<!--- Why is this change required? What problem does it solve? -->\n<!--- If it fixes an open issue, please link to the issue here. -->\n\n## How Has This Been Tested?\n<!--- Please describe in detail how you tested your changes. -->\n<!--- Include details of your testing environment, and the tests you ran to -->\n<!--- see how your change affects other areas of the code, etc. -->\n\n## Screenshots (if appropriate)\n"
  },
  {
    "path": ".gitignore",
    "content": "## Ignore Visual Studio temporary files, build results, and\r\n## files generated by popular Visual Studio add-ons.\r\n##\r\n## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore\r\n\r\n# User-specific files\r\n*.suo\r\n*.user\r\n*.userosscache\r\n*.sln.docstates\r\n\r\n# User-specific files (MonoDevelop/Xamarin Studio)\r\n*.userprefs\r\n\r\n# Build results\r\n[Dd]ebug/\r\n[Dd]ebugPublic/\r\n[Rr]elease/\r\n[Rr]eleases/\r\nx64/\r\nx86/\r\nbld/\r\n[Bb]in/\r\n[Oo]bj/\r\n[Ll]og/\r\n\r\n# Visual Studio 2015/2017 cache/options directory\r\n.vs/\r\n# Uncomment if you have tasks that create the project's static files in wwwroot\r\n#wwwroot/\r\n\r\n# Visual Studio 2017 auto generated files\r\nGenerated\\ Files/\r\n\r\n# MSTest test Results\r\n[Tt]est[Rr]esult*/\r\n[Bb]uild[Ll]og.*\r\n\r\n# NUNIT\r\n*.VisualState.xml\r\nTestResult.xml\r\n\r\n# Build Results of an ATL Project\r\n[Dd]ebugPS/\r\n[Rr]eleasePS/\r\ndlldata.c\r\n\r\n# Benchmark Results\r\nBenchmarkDotNet.Artifacts/\r\n\r\n# .NET Core\r\nproject.lock.json\r\nproject.fragment.lock.json\r\nartifacts/\r\n\r\n# StyleCop\r\nStyleCopReport.xml\r\n\r\n# Files built by Visual Studio\r\n*_i.c\r\n*_p.c\r\n*_i.h\r\n*.ilk\r\n*.meta\r\n*.obj\r\n*.iobj\r\n*.pch\r\n*.pdb\r\n*.ipdb\r\n*.pgc\r\n*.pgd\r\n*.rsp\r\n*.sbr\r\n*.tlb\r\n*.tli\r\n*.tlh\r\n*.tmp\r\n*.tmp_proj\r\n*.log\r\n*.vspscc\r\n*.vssscc\r\n.builds\r\n*.pidb\r\n*.svclog\r\n*.scc\r\n\r\n# Chutzpah Test files\r\n_Chutzpah*\r\n\r\n# Visual C++ cache files\r\nipch/\r\n*.aps\r\n*.ncb\r\n*.opendb\r\n*.opensdf\r\n*.sdf\r\n*.cachefile\r\n*.VC.db\r\n*.VC.VC.opendb\r\n\r\n# Visual Studio profiler\r\n*.psess\r\n*.vsp\r\n*.vspx\r\n*.sap\r\n\r\n# Visual Studio Trace Files\r\n*.e2e\r\n\r\n# TFS 2012 Local Workspace\r\n$tf/\r\n\r\n# Guidance Automation Toolkit\r\n*.gpState\r\n\r\n# ReSharper is a .NET coding add-in\r\n_ReSharper*/\r\n*.[Rr]e[Ss]harper\r\n*.DotSettings.user\r\n\r\n# JustCode is a .NET coding 
add-in\r\n.JustCode\r\n\r\n# TeamCity is a build add-in\r\n_TeamCity*\r\n\r\n# DotCover is a Code Coverage Tool\r\n*.dotCover\r\n\r\n# AxoCover is a Code Coverage Tool\r\n.axoCover/*\r\n!.axoCover/settings.json\r\n\r\n# Visual Studio code coverage results\r\n*.coverage\r\n*.coveragexml\r\n\r\n# NCrunch\r\n_NCrunch_*\r\n.*crunch*.local.xml\r\nnCrunchTemp_*\r\n\r\n# MightyMoose\r\n*.mm.*\r\nAutoTest.Net/\r\n\r\n# Web workbench (sass)\r\n.sass-cache/\r\n\r\n# Installshield output folder\r\n[Ee]xpress/\r\n\r\n# DocProject is a documentation generator add-in\r\nDocProject/buildhelp/\r\nDocProject/Help/*.HxT\r\nDocProject/Help/*.HxC\r\nDocProject/Help/*.hhc\r\nDocProject/Help/*.hhk\r\nDocProject/Help/*.hhp\r\nDocProject/Help/Html2\r\nDocProject/Help/html\r\n\r\n# Click-Once directory\r\npublish/\r\n\r\n# Publish Web Output\r\n*.[Pp]ublish.xml\r\n*.azurePubxml\r\n# Note: Comment the next line if you want to checkin your web deploy settings,\r\n# but database connection strings (with potential passwords) will be unencrypted\r\n*.pubxml\r\n*.publishproj\r\n\r\n# Microsoft Azure Web App publish settings. 
Comment the next line if you want to\r\n# checkin your Azure Web App publish settings, but sensitive information contained\r\n# in these scripts will be unencrypted\r\nPublishScripts/\r\n\r\n# NuGet Packages\r\n*.nupkg\r\n# The packages folder can be ignored because of Package Restore\r\n**/[Pp]ackages/*\r\n# except build/, which is used as an MSBuild target.\r\n!**/[Pp]ackages/build/\r\n# Uncomment if necessary however generally it will be regenerated when needed\r\n#!**/[Pp]ackages/repositories.config\r\n# NuGet v3's project.json files produces more ignorable files\r\n*.nuget.props\r\n*.nuget.targets\r\n\r\n# Microsoft Azure Build Output\r\ncsx/\r\n*.build.csdef\r\n\r\n# Microsoft Azure Emulator\r\necf/\r\nrcf/\r\n\r\n# Windows Store app package directories and files\r\nAppPackages/\r\nBundleArtifacts/\r\nPackage.StoreAssociation.xml\r\n_pkginfo.txt\r\n*.appx\r\n\r\n# Visual Studio cache files\r\n# files ending in .cache can be ignored\r\n*.[Cc]ache\r\n# but keep track of directories ending in .cache\r\n!*.[Cc]ache/\r\n\r\n# Others\r\nClientBin/\r\n~$*\r\n*~\r\n*.dbmdl\r\n*.dbproj.schemaview\r\n*.jfm\r\n*.pfx\r\n*.publishsettings\r\norleans.codegen.cs\r\n\r\n# Including strong name files can present a security risk\r\n# (https://github.com/github/gitignore/pull/2483#issue-259490424)\r\n#*.snk\r\n\r\n# Since there are multiple workflows, uncomment next line to ignore bower_components\r\n# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)\r\n#bower_components/\r\n\r\n# RIA/Silverlight projects\r\nGenerated_Code/\r\n\r\n# Backup & report files from converting an old project file\r\n# to a newer Visual Studio version. 
Backup files are not needed,\r\n# because we have git ;-)\r\n_UpgradeReport_Files/\r\nBackup*/\r\nUpgradeLog*.XML\r\nUpgradeLog*.htm\r\nServiceFabricBackup/\r\n*.rptproj.bak\r\n\r\n# SQL Server files\r\n*.mdf\r\n*.ldf\r\n*.ndf\r\n\r\n# Business Intelligence projects\r\n*.rdl.data\r\n*.bim.layout\r\n*.bim_*.settings\r\n*.rptproj.rsuser\r\n\r\n# Microsoft Fakes\r\nFakesAssemblies/\r\n\r\n# GhostDoc plugin setting file\r\n*.GhostDoc.xml\r\n\r\n# Node.js Tools for Visual Studio\r\n.ntvs_analysis.dat\r\nnode_modules/\r\n\r\n# Visual Studio 6 build log\r\n*.plg\r\n\r\n# Visual Studio 6 workspace options file\r\n*.opt\r\n\r\n# Visual Studio 6 auto-generated workspace file (contains which files were open etc.)\r\n*.vbw\r\n\r\n# Visual Studio LightSwitch build output\r\n**/*.HTMLClient/GeneratedArtifacts\r\n**/*.DesktopClient/GeneratedArtifacts\r\n**/*.DesktopClient/ModelManifest.xml\r\n**/*.Server/GeneratedArtifacts\r\n**/*.Server/ModelManifest.xml\r\n_Pvt_Extensions\r\n\r\n# Paket dependency manager\r\n.paket/paket.exe\r\npaket-files/\r\n\r\n# FAKE - F# Make\r\n.fake/\r\n\r\n# JetBrains Rider\r\n.idea/\r\n*.sln.iml\r\n\r\n# CodeRush\r\n.cr/\r\n\r\n# Python Tools for Visual Studio (PTVS)\r\n__pycache__/\r\n*.pyc\r\n\r\n# Cake - Uncomment if you are using it\r\n# tools/**\r\n# !tools/packages.config\r\n\r\n# Tabs Studio\r\n*.tss\r\n\r\n# Telerik's JustMock configuration file\r\n*.jmconfig\r\n\r\n# BizTalk build output\r\n*.btp.cs\r\n*.btm.cs\r\n*.odx.cs\r\n*.xsd.cs\r\n\r\n# OpenCover UI analysis results\r\nOpenCover/\r\n\r\n# Azure Stream Analytics local run output\r\nASALocalRun/\r\n\r\n# MSBuild Binary and Structured Log\r\n*.binlog\r\n\r\n# NVidia Nsight GPU debugger configuration file\r\n*.nvuser\r\n\r\n# MFractors (Xamarin productivity tool) working folder\r\n.mfractor/\r\n\r\n# Local History for Visual Studio\r\n.localhistory/\r\n\r\n\r\n#==============================================\r\n# 
3DS\r\n#==============================================\r\n*.d\r\n*.o\r\n*.elf\r\n*.3dsx\r\n\r\n#==============================================\r\n# PinBox\r\n#==============================================\r\nPinBox/PinBox/build/\r\nPinBox/PinBox/data/\r\n\r\n#==============================================\r\n# PinBoxServer\r\n#==============================================\r\nPinBoxServer/Debug/\r\nPinBoxServer/Release/\r\n\r\n#==============================================\r\n# Other Files\r\n#==============================================\r\n*.rar\r\n*.flv\r\n*.mp4\r\n*.cia\r\n*.smdh\r\nPinBox/PinBox/Pre-Built/PinBox.smdh\r\nThirdParty/xinput1_3.dll\r\nThirdParty/x360ce.ini\r\n"
  },
  {
    "path": "LICENSE",
    "content": "\t\tGLWT(Good Luck With That) Public License\n                 Copyright (c) Everyone, except Author\n\nEveryone is permitted to copy, distribute, modify, merge, sell, publish,\nsublicense or whatever they want with this software but at their OWN RISK.\n\n\t      \t       \t     Preamble\n\nThe author has absolutely no clue what the code in this project does.\nIt might just work or not, there is no third option.\n\n\n                GOOD LUCK WITH THAT PUBLIC LICENSE\n   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION, AND MODIFICATION\n\n  0. You just DO WHATEVER YOU WANT TO as long as you NEVER LEAVE A\nTRACE TO TRACK THE AUTHOR of the original product to blame for or hold\nresponsible.\n\nIN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\nGood luck and Godspeed.\n"
  },
  {
    "path": "PinBox/PinBox/Makefile",
    "content": "export DEVKITPRO = /c/devkitPro\r\nexport DEVKITARM = /c/devkitPro/devkitARM\r\n#---------------------------------------------------------------------------------\r\n.SUFFIXES:\r\n#---------------------------------------------------------------------------------\r\n\r\nifeq ($(strip $(DEVKITARM)),)\r\n$(error \"Please set DEVKITARM in your environment. export DEVKITARM=<path to>devkitARM\")\r\nendif\r\n\r\nTOPDIR ?= $(CURDIR)\r\ninclude $(DEVKITARM)/3ds_rules\r\n\r\n#---------------------------------------------------------------------------------\r\n# TARGET is the name of the output\r\n# BUILD is the directory where object files & intermediate files will be placed\r\n# SOURCES is a list of directories containing source code\r\n# DATA is a list of directories containing data files\r\n# INCLUDES is a list of directories containing header files\r\n#\r\n# NO_SMDH: if set to anything, no SMDH file is generated.\r\n# ROMFS is the directory which contains the RomFS, relative to the Makefile (Optional)\r\n# APP_TITLE is the name of the app stored in the SMDH file (Optional)\r\n# APP_DESCRIPTION is the description of the app stored in the SMDH file (Optional)\r\n# APP_AUTHOR is the author of the app stored in the SMDH file (Optional)\r\n# ICON is the filename of the icon (.png), relative to the project folder.\r\n#   If not set, it attempts to use one of the following (in this order):\r\n#     - <Project name>.png\r\n#     - icon.png\r\n#     - <libctru folder>/default_icon.png\r\n#---------------------------------------------------------------------------------\r\nTARGET\t\t:=\tPinBox\r\nBUILD\t\t:=\tbuild\r\nSOURCES\t\t:=\tsource\r\nDATA\t\t:=\tdata\r\nINCLUDES\t:=\tinclude\r\nICON        :=  assets/icon.png\r\nROMFS\t\t:=\tromfs\r\n\r\n\r\n#---------------------------------------------------------------------------------\r\nAPP_TITLE\t\t\t\t\t:= PinBox\r\nAPP_DESCRIPTION\t\t\t\t:= 3DS Client - PC Desktop Streaming\r\nAPP_AUTHOR\t\t\t\t\t:= 
Namkazt\r\nAPP_PRODUCT_CODE\t\t\t:= CTR-P-PINBOX\r\nAPP_UNIQUE_ID\t\t\t\t:= 0x18EF\r\nAPP_ENCRYPTED\t\t\t\t:= false\r\n#---------------------------------------------------------------------------------\r\n# options for code generation\r\n#---------------------------------------------------------------------------------\r\nARCH\t:=\t-march=armv6k -mtune=mpcore -mtp=soft -mfloat-abi=hard\r\n\r\nCFLAGS\t:=\t-ggdb -Wall -Og -mword-relocations -DDEBUG=1 \\\r\n\t\t\t\t-ffunction-sections \\\r\n\t\t\t\t$(ARCH)\r\n\tCFLAGS\t+=\t$(INCLUDE) -DARM11 -D_3DS\r\n\tCXXFLAGS\t:= $(CFLAGS) -fno-rtti -fno-exceptions -std=gnu++11 -fpermissive -fexceptions -ffast-math\r\n\tASFLAGS\t:=\t-g $(ARCH)\r\n\tLDFLAGS\t=\t-specs=3dsx.specs -g $(ARCH) -Wl,-Map,$(notdir $*.map)\r\n\tLIBS\t:= -lavformat -lavcodec -lavutil -lswscale -lswresample -lconfig -lcitro3dd -lctrud -lz -lm\r\n#---------------------------------------------------------------------------------\r\n# list of directories containing libraries, this must be the top level containing\r\n# include and lib\r\n#---------------------------------------------------------------------------------\r\nADDITION_LIBS := $(DEVKITPRO)/libwebp $(DEVKITPRO)/libcitro3d\r\nLIBDIRS\t:= $(CTRULIB) $(PORTLIBS) $(ADDITION_LIBS)\r\n\r\n\r\n#---------------------------------------------------------------------------------\r\n# no real need to edit anything past this point unless you need to add additional\r\n# rules for different file extensions\r\n#---------------------------------------------------------------------------------\r\nifneq ($(BUILD),$(notdir $(CURDIR)))\r\n#---------------------------------------------------------------------------------\r\n\r\nexport OUTPUT\t:=\t$(CURDIR)/$(TARGET)\r\nexport TOPDIR\t:=\t$(CURDIR)\r\n\r\nexport VPATH\t:=\t$(foreach dir,$(SOURCES),$(CURDIR)/$(dir)) \\\r\n\t\t\t$(foreach dir,$(DATA),$(CURDIR)/$(dir))\r\n\r\nexport DEPSDIR\t:=\t$(CURDIR)/$(BUILD)\r\n\r\nCFILES\t\t:=\t$(foreach dir,$(SOURCES),$(notdir 
$(wildcard $(dir)/*.c)))\r\nCPPFILES\t:=\t$(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.cpp)))\r\nSFILES\t\t:=\t$(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.s)))\r\nPICAFILES\t:=\t$(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.v.pica)))\r\nSHLISTFILES\t:=\t$(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.shlist)))\r\nBINFILES\t:=\t$(foreach dir,$(DATA),$(notdir $(wildcard $(dir)/*.*)))\r\n\r\n#---------------------------------------------------------------------------------\r\n# use CXX for linking C++ projects, CC for standard C\r\n#---------------------------------------------------------------------------------\r\nifeq ($(strip $(CPPFILES)),)\r\n#---------------------------------------------------------------------------------\r\n\texport LD\t:=\t$(CC)\r\n#---------------------------------------------------------------------------------\r\nelse\r\n#---------------------------------------------------------------------------------\r\n\texport LD\t:=\t$(CXX)\r\n#---------------------------------------------------------------------------------\r\nendif\r\n#---------------------------------------------------------------------------------\r\n\r\nexport OFILES_SOURCES \t:=\t$(CPPFILES:.cpp=.o) $(CFILES:.c=.o) $(SFILES:.s=.o)\r\n\r\nexport OFILES_BIN\t:=\t$(addsuffix .o,$(BINFILES)) \\\r\n\t\t\t$(PICAFILES:.v.pica=.shbin.o) $(SHLISTFILES:.shlist=.shbin.o)\r\n\r\nexport OFILES := $(OFILES_BIN) $(OFILES_SOURCES)\r\n\r\nexport HFILES\t:=\t$(PICAFILES:.v.pica=_shbin.h) $(SHLISTFILES:.shlist=_shbin.h) $(addsuffix .h,$(subst .,_,$(BINFILES)))\r\n\r\nexport INCLUDE\t:=\t$(foreach dir,$(INCLUDES),-I$(CURDIR)/$(dir)) \\\r\n\t\t\t$(foreach dir,$(LIBDIRS),-I$(dir)/include) \\\r\n\t\t\t-I$(CURDIR)/$(BUILD)\r\n\r\nexport LIBPATHS\t:=\t$(foreach dir,$(LIBDIRS),-L$(dir)/lib)\r\n\r\nifeq ($(strip $(ICON)),)\r\n\ticons := $(wildcard *.png)\r\n\tifneq (,$(findstring $(TARGET).png,$(icons)))\r\n\t\texport APP_ICON := 
$(TOPDIR)/assets/$(TARGET).png\r\n\telse\r\n\t\tifneq (,$(findstring icon.png,$(icons)))\r\n\t\t\texport APP_ICON := $(TOPDIR)/assets/icon.png\r\n\t\tendif\r\n\tendif\r\nelse\r\n\texport APP_ICON := $(TOPDIR)/assets/$(ICON)\r\nendif\r\n\r\nifeq ($(strip $(NO_SMDH)),)\r\n\texport _3DSXFLAGS += --smdh=$(CURDIR)/$(TARGET).smdh\r\nendif\r\n\r\nifneq ($(ROMFS),)\r\n\texport _3DSXFLAGS += --romfs=$(CURDIR)/$(ROMFS)\r\nendif\r\n\r\n.PHONY: $(BUILD) clean all\r\n\r\n#---------------------------------------------------------------------------------\r\nall: $(BUILD)\r\n\r\n$(BUILD):\r\n\t@[ -d $@ ] || mkdir -p $@\r\n\t@$(MAKE) --no-print-directory -C $(BUILD) -f $(CURDIR)/Makefile\r\n\r\n#---------------------------------------------------------------------------------\r\nclean:\r\n\t@echo clean ...\r\n\t@rm -fr $(BUILD) $(TARGET).3dsx $(TARGET).elf\r\n\t\r\n#---------------------------------------------------------------------------------\r\n$(TARGET)-strip.elf: $(BUILD)\r\n\t@$(STRIP) $(TARGET).elf -o $(TARGET)-strip.elf\r\n#---------------------------------------------------------------------------------\r\ncci: $(TARGET)-strip.elf\r\n\t@makerom -f cci -rsf $(CURDIR)/$(TARGET).rsf -target d -exefslogo -elf $(TARGET)-strip.elf -o $(TARGET).3ds\r\n#---------------------------------------------------------------------------------\r\ncia: $(TARGET)-strip.elf\r\n\t@$(CURDIR)/tools/makerom32.exe -f cia -o $(TARGET).cia -elf $(TARGET)-strip.elf -rsf $(CURDIR)/$(TARGET).rsf -exefslogo -target t\r\n#---------------------------------------------------------------------------------\r\nnetlink: $(BUILD)\r\n\t$(DEVKITARM)/bin/3dslink.exe --address 192.168.50.231 $(TARGET).3dsx\r\n#---------------------------------------------------------------------------------\r\ncitra: $(BUILD)\r\n\t$(CURDIR)/tools/citra/citra.exe $(TARGET).3dsx\r\n#---------------------------------------------------------------------------------\r\ncitra-qt: $(BUILD)\r\n\t$(CURDIR)/tools/citra/citra-qt.exe 
$(TARGET).3dsx\r\n#---------------------------------------------------------------------------------\r\nelse\r\n\r\nDEPENDS\t:=\t$(OFILES:.o=.d)\r\n\r\n#---------------------------------------------------------------------------------\r\n# main targets\r\n#---------------------------------------------------------------------------------\r\nifeq ($(strip $(NO_SMDH)),)\r\n$(OUTPUT).3dsx\t:\t$(OUTPUT).elf $(OUTPUT).smdh\r\nelse\r\n$(OUTPUT).3dsx\t:\t$(OUTPUT).elf\r\nendif\r\n\r\n$(OFILES_SOURCES) : $(HFILES)\r\n\r\n$(OUTPUT).elf\t:\t$(OFILES)\r\n\r\n#---------------------------------------------------------------------------------\r\n# you need a rule like this for each extension you use as binary data\r\n#---------------------------------------------------------------------------------\r\n%.bin.o\t%_bin.h :\t%.bin\r\n#---------------------------------------------------------------------------------\r\n\t@echo $(notdir $<)\r\n\t@$(bin2o)\r\n#---------------------------------------------------------------------------------\r\n%.ttf.o\t:\t%.ttf\r\n#---------------------------------------------------------------------------------\r\n\t@echo $(notdir $<)\r\n\t@$(bin2o)\r\n#---------------------------------------------------------------------------------\r\n%.png.o\t:\t%.png\r\n#---------------------------------------------------------------------------------\r\n\t@echo $(notdir $<)\r\n\t@$(bin2o)\r\n#---------------------------------------------------------------------------------\r\n# rules for assembling GPU shaders\r\n#---------------------------------------------------------------------------------\r\ndefine shader-as\r\n\t$(eval CURBIN := $*.shbin)\r\n\t$(eval DEPSFILE := $(DEPSDIR)/$*.shbin.d)\r\n\techo \"$(CURBIN).o: $< $1\" > $(DEPSFILE)\r\n\techo \"extern const u8\" `(echo $(CURBIN) | sed -e 's/^\\([0-9]\\)/_\\1/' | tr . _)`\"_end[];\" > `(echo $(CURBIN) | tr . _)`.h\r\n\techo \"extern const u8\" `(echo $(CURBIN) | sed -e 's/^\\([0-9]\\)/_\\1/' | tr . 
_)`\"[];\" >> `(echo $(CURBIN) | tr . _)`.h\r\n\techo \"extern const u32\" `(echo $(CURBIN) | sed -e 's/^\\([0-9]\\)/_\\1/' | tr . _)`_size\";\" >> `(echo $(CURBIN) | tr . _)`.h\r\n\tpicasso -o $(CURBIN) $1\r\n\tbin2s $(CURBIN) | $(AS) -o $*.shbin.o\r\nendef\r\n\r\n%.shbin.o %_shbin.h : %.v.pica %.g.pica\r\n\t@echo $(notdir $^)\r\n\t@$(call shader-as,$^)\r\n\r\n%.shbin.o %_shbin.h : %.v.pica\r\n\t@echo $(notdir $<)\r\n\t@$(call shader-as,$<)\r\n\r\n%.shbin.o %_shbin.h : %.shlist\r\n\t@echo $(notdir $<)\r\n\t@$(call shader-as,$(foreach file,$(shell cat $<),$(dir $<)$(file)))\r\n\r\n\r\n-include $(DEPENDS)\r\n\r\n#---------------------------------------------------------------------------------------\r\nendif\r\n#---------------------------------------------------------------------------------------\r\n"
  },
  {
    "path": "PinBox/PinBox/PinBox.rsf",
    "content": "BasicInfo:\r\n  Title                   : PINBOX r0.2.1\r\n  ProductCode             : CTR-P-PINBOX\r\n  ContentType            : Application # Application / SystemUpdate / Manual / Child / Trial\r\n  Logo                    : Nintendo # Nintendo / Licensed / Distributed / iQue / iQueForSystem\r\n\r\nRomFs:\r\n  RootPath: $(APP_ROMFS)\r\n\r\nTitleInfo:\r\n  Category                : Application\r\n  UniqueId                : 0x18EF\r\n\r\nOption:\r\n  UseOnSD                 : true # true if App is to be installed to SD\r\n  FreeProductCode         : true # Removes limitations on ProductCode\r\n  MediaFootPadding        : false # If true CCI files are created with padding\r\n  EnableCrypt             : false #$(APP_ENCRYPTED) # Enables encryption for NCCH and CIA\r\n  EnableCompress          : true # Compresses where applicable (currently only exefs:/.code)\r\n  \r\nAccessControlInfo:\r\n  CoreVersion                   : 2\r\n\r\n  # Exheader Format Version\r\n  DescVersion                   : 2\r\n  \r\n  # Minimum Required Kernel Version (below is for 4.5.0)\r\n  ReleaseKernelMajor            : \"02\"\r\n  ReleaseKernelMinor            : \"33\" \r\n\r\n  # ExtData\r\n  UseExtSaveData                : false # enables ExtData       \r\n  #ExtSaveDataId                : 0x300 # only set this when the ID is different to the UniqueId\r\n\r\n  # FS:USER Archive Access Permissions\r\n  # Uncomment as required\r\n  FileSystemAccess:\r\n   - CategorySystemApplication\r\n   - CategoryHardwareCheck\r\n   - CategoryFileSystemTool\r\n   - Debug\r\n   - TwlCardBackup\r\n   - TwlNandData\r\n   - Boss\r\n   - DirectSdmc\r\n   - Core\r\n   - CtrNandRo\r\n   - CtrNandRw\r\n   - CtrNandRoWrite\r\n   - CategorySystemSettings\r\n   - CardBoard\r\n   - ExportImportIvs\r\n   - DirectSdmcWrite\r\n   - SwitchCleanup\r\n   - SaveDataMove\r\n   - Shop\r\n   - Shell\r\n   - CategoryHomeMenu\r\n  IoAccessControl:\r\n   - FsMountNand\r\n   - FsMountNandRoWrite\r\n   - 
FsMountTwln\r\n   - FsMountWnand\r\n   - FsMountCardSpi\r\n   - UseSdif3\r\n   - CreateSeed\r\n   - UseCardSpi\r\n\r\n  # Process Settings\r\n  MemoryType                    : Application # Application/System/Base\r\n  SystemMode                    : $(APP_SYSTEM_MODE) # 64MB(Default)/96MB/80MB/72MB/32MB\r\n  IdealProcessor                : 0\r\n  AffinityMask                  : 1\r\n  Priority                      : 16\r\n  MaxCpu                        : 0x9E # Default\r\n  HandleTableSize               : 0x200\r\n  DisableDebug                  : false\r\n  EnableForceDebug              : false\r\n  CanWriteSharedPage            : true\r\n  CanUsePrivilegedPriority      : false\r\n  CanUseNonAlphabetAndNumber    : true\r\n  PermitMainFunctionArgument    : true\r\n  CanShareDeviceMemory          : true\r\n  RunnableOnSleep               : false\r\n  SpecialMemoryArrange          : true\r\n\r\n  # New3DS Exclusive Process Settings\r\n  SystemModeExt                 : $(APP_SYSTEM_MODE_EXT) # Legacy(Default)/124MB/178MB  Legacy:Use Old3DS SystemMode\r\n  CpuSpeed                      : 804MHz # 256MHz(Default)/804MHz\r\n  EnableL2Cache                 : true # false(default)/true\r\n  CanAccessCore2                : true \r\n\r\n  # Virtual Address Mappings\r\n  IORegisterMapping:\r\n   - 1ff00000-1ff7ffff   # DSP memory\r\n  MemoryMapping: \r\n   - 1f000000-1f5fffff:r # VRAM\r\n\r\n  # Accessible SVCs, <Name>:<ID>\r\n  SystemCallAccess: \r\n    ControlMemory: 1\r\n    QueryMemory: 2\r\n    ExitProcess: 3\r\n    GetProcessAffinityMask: 4\r\n    SetProcessAffinityMask: 5\r\n    GetProcessIdealProcessor: 6\r\n    SetProcessIdealProcessor: 7\r\n    CreateThread: 8\r\n    ExitThread: 9\r\n    SleepThread: 10\r\n    GetThreadPriority: 11\r\n    SetThreadPriority: 12\r\n    GetThreadAffinityMask: 13\r\n    SetThreadAffinityMask: 14\r\n    GetThreadIdealProcessor: 15\r\n    SetThreadIdealProcessor: 16\r\n    GetCurrentProcessorNumber: 17\r\n    Run: 18\r\n    CreateMutex: 
19\r\n    ReleaseMutex: 20\r\n    CreateSemaphore: 21\r\n    ReleaseSemaphore: 22\r\n    CreateEvent: 23\r\n    SignalEvent: 24\r\n    ClearEvent: 25\r\n    CreateTimer: 26\r\n    SetTimer: 27\r\n    CancelTimer: 28\r\n    ClearTimer: 29\r\n    CreateMemoryBlock: 30\r\n    MapMemoryBlock: 31\r\n    UnmapMemoryBlock: 32\r\n    CreateAddressArbiter: 33\r\n    ArbitrateAddress: 34\r\n    CloseHandle: 35\r\n    WaitSynchronization1: 36\r\n    WaitSynchronizationN: 37\r\n    SignalAndWait: 38\r\n    DuplicateHandle: 39\r\n    GetSystemTick: 40\r\n    GetHandleInfo: 41\r\n    GetSystemInfo: 42\r\n    GetProcessInfo: 43\r\n    GetThreadInfo: 44\r\n    ConnectToPort: 45\r\n    SendSyncRequest1: 46\r\n    SendSyncRequest2: 47\r\n    SendSyncRequest3: 48\r\n    SendSyncRequest4: 49\r\n    SendSyncRequest: 50\r\n    OpenProcess: 51\r\n    OpenThread: 52\r\n    GetProcessId: 53\r\n    GetProcessIdOfThread: 54\r\n    GetThreadId: 55\r\n    GetResourceLimit: 56\r\n    GetResourceLimitLimitValues: 57\r\n    GetResourceLimitCurrentValues: 58\r\n    GetThreadContext: 59\r\n    Break: 60\r\n    OutputDebugString: 61\r\n    ControlPerformanceCounter: 62\r\n    CreatePort: 71\r\n    CreateSessionToPort: 72\r\n    CreateSession: 73\r\n    AcceptSession: 74\r\n    ReplyAndReceive1: 75\r\n    ReplyAndReceive2: 76\r\n    ReplyAndReceive3: 77\r\n    ReplyAndReceive4: 78\r\n    ReplyAndReceive: 79\r\n    BindInterrupt: 80\r\n    UnbindInterrupt: 81\r\n    InvalidateProcessDataCache: 82\r\n    StoreProcessDataCache: 83\r\n    FlushProcessDataCache: 84\r\n    StartInterProcessDma: 85\r\n    StopDma: 86\r\n    GetDmaState: 87\r\n    RestartDma: 88\r\n    DebugActiveProcess: 96\r\n    BreakDebugProcess: 97\r\n    TerminateDebugProcess: 98\r\n    GetProcessDebugEvent: 99\r\n    ContinueDebugEvent: 100\r\n    GetProcessList: 101\r\n    GetThreadList: 102\r\n    GetDebugThreadContext: 103\r\n    SetDebugThreadContext: 104\r\n    QueryDebugProcessMemory: 105\r\n    ReadProcessMemory: 106\r\n    
WriteProcessMemory: 107\r\n    SetHardwareBreakPoint: 108\r\n    GetDebugThreadParam: 109\r\n    ControlProcessMemory: 112\r\n    MapProcessMemory: 113\r\n    UnmapProcessMemory: 114\r\n    CreateCodeSet: 115\r\n    CreateProcess: 117\r\n    TerminateProcess: 118\r\n    SetProcessResourceLimits: 119\r\n    CreateResourceLimit: 120\r\n    SetResourceLimitValues: 121\r\n    AddCodeSegment: 122\r\n    Backdoor: 123\r\n    KernelSetState: 124\r\n    QueryProcessMemory: 125\r\n\r\n  # Service List\r\n  # Maximum 34 services (32 if firmware is prior to 9.6.0)\r\n  ServiceAccessControl:\r\n   - APT:U\r\n   - ac:u\r\n   - am:net\r\n   - boss:U\r\n   - cam:u\r\n   - cecd:u\r\n   - cfg:nor\r\n   - cfg:u\r\n   - csnd:SND\r\n   - dsp::DSP\r\n   - frd:u\r\n   - fs:USER\r\n   - gsp::Gpu\r\n   - gsp::Lcd\r\n   - hid:USER\r\n   - http:C\r\n   - ir:rst\r\n   - ir:u\r\n   - ir:USER\r\n   - mic:u\r\n   - ndm:u\r\n   - news:s\r\n   - nwm::EXT\r\n   - nwm::UDS\r\n   - ptm:sysm\r\n   - ptm:u\r\n   - pxi:dev\r\n   - soc:U\r\n   - ssl:C\r\n   - y2r:u\r\n\r\n\r\nSystemControlInfo:\r\n  SaveDataSize: 0KB # Change if the app uses savedata\r\n  RemasterVersion: 2\r\n  StackSize: 0x40000\r\n\r\n  # Modules that run services listed above should be included below\r\n  # Maximum 48 dependencies\r\n  # <module name>:<module titleid>\r\n  Dependency: \r\n    ac: 0x0004013000002402\r\n    #act: 0x0004013000003802\r\n    am: 0x0004013000001502\r\n    boss: 0x0004013000003402\r\n    camera: 0x0004013000001602\r\n    cecd: 0x0004013000002602\r\n    cfg: 0x0004013000001702\r\n    codec: 0x0004013000001802\r\n    csnd: 0x0004013000002702\r\n    dlp: 0x0004013000002802\r\n    dsp: 0x0004013000001a02\r\n    friends: 0x0004013000003202\r\n    gpio: 0x0004013000001b02\r\n    gsp: 0x0004013000001c02\r\n    hid: 0x0004013000001d02\r\n    http: 0x0004013000002902\r\n    i2c: 0x0004013000001e02\r\n    ir: 0x0004013000003302\r\n    mcu: 0x0004013000001f02\r\n    mic: 0x0004013000002002\r\n    ndm: 
0x0004013000002b02\r\n    news: 0x0004013000003502\r\n    #nfc: 0x0004013000004002\r\n    nim: 0x0004013000002c02\r\n    nwm: 0x0004013000002d02\r\n    pdn: 0x0004013000002102\r\n    ps: 0x0004013000003102\r\n    ptm: 0x0004013000002202\r\n    #qtm: 0x0004013020004202\r\n    ro: 0x0004013000003702\r\n    socket: 0x0004013000002e02\r\n    spi: 0x0004013000002302\r\n    ssl: 0x0004013000002f02\r\n"
  },
  {
    "path": "PinBox/PinBox/PinBox.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<Project DefaultTargets=\"Build\" ToolsVersion=\"14.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\r\n  <ItemGroup Label=\"ProjectConfigurations\">\r\n    <ProjectConfiguration Include=\"Citra-QT|Win32\">\r\n      <Configuration>Citra-QT</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Citra-QT|x64\">\r\n      <Configuration>Citra-QT</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Citra|Win32\">\r\n      <Configuration>Citra</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Netlink|Win32\">\r\n      <Configuration>Netlink</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Citra|x64\">\r\n      <Configuration>Citra</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Netlink|x64\">\r\n      <Configuration>Netlink</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"TestClient|Win32\">\r\n      <Configuration>TestClient</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"TestClient|x64\">\r\n      <Configuration>TestClient</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <None Include=\"Makefile\" />\r\n    <None Include=\"source\\vshader.v.pica\" />\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClCompile Include=\"source\\ConfigManager.cpp\" />\r\n    <ClCompile Include=\"source\\easing.cpp\" />\r\n    <ClCompile Include=\"source\\lodepng.cpp\" />\r\n    <ClCompile Include=\"source\\main.cpp\" />\r\n    <ClCompile 
Include=\"source\\Mutex.cpp\" />\r\n    <ClCompile Include=\"source\\PPAudio.cpp\" />\r\n    <ClCompile Include=\"source\\PPDecoder.cpp\" />\r\n    <ClCompile Include=\"source\\PPGraphics.cpp\" />\r\n    <ClCompile Include=\"source\\PPMessage.cpp\" />\r\n    <ClCompile Include=\"source\\PPSession.cpp\" />\r\n    <ClCompile Include=\"source\\PPSessionManager.cpp\" />\r\n    <ClCompile Include=\"source\\PPUI.cpp\" />\r\n    <ClCompile Include=\"source\\yuv_rgb.c\" />\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClInclude Include=\"include\\Anim.h\" />\r\n    <ClInclude Include=\"include\\Color.h\" />\r\n    <ClInclude Include=\"include\\ConfigManager.h\" />\r\n    <ClInclude Include=\"include\\constant.h\" />\r\n    <ClInclude Include=\"include\\easing.h\" />\r\n    <ClInclude Include=\"include\\HubItem.h\" />\r\n    <ClInclude Include=\"include\\lodepng.h\" />\r\n    <ClInclude Include=\"include\\Logger.h\" />\r\n    <ClInclude Include=\"include\\Mutex.h\" />\r\n    <ClInclude Include=\"include\\PPAudio.h\" />\r\n    <ClInclude Include=\"include\\PPDecoder.h\" />\r\n    <ClInclude Include=\"include\\PPGraphics.h\" />\r\n    <ClInclude Include=\"include\\PPMessage.h\" />\r\n    <ClInclude Include=\"include\\PPSession.h\" />\r\n    <ClInclude Include=\"include\\PPSessionManager.h\" />\r\n    <ClInclude Include=\"include\\PPUI.h\" />\r\n    <ClInclude Include=\"include\\yuv_rgb.h\" />\r\n  </ItemGroup>\r\n  <PropertyGroup Label=\"Globals\">\r\n    <ProjectGuid>{7C85BF63-9A1F-43F0-9560-6E51BA537C01}</ProjectGuid>\r\n    <Keyword>MakeFileProj</Keyword>\r\n    <WindowsTargetPlatformVersion>8.1</WindowsTargetPlatformVersion>\r\n  </PropertyGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Citra|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Makefile</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    
<PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='TestClient|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Netlink|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Makefile</ConfigurationType>\r\n    <UseDebugLibraries>false</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Citra|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='TestClient|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Netlink|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>false</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <PropertyGroup Label=\"Configuration\" Condition=\"'$(Configuration)|$(Platform)'=='Citra-QT|Win32'\">\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <ConfigurationType>Makefile</ConfigurationType>\r\n  </PropertyGroup>\r\n  <PropertyGroup Label=\"Configuration\" Condition=\"'$(Configuration)|$(Platform)'=='Citra-QT|x64'\">\r\n    <PlatformToolset>v140</PlatformToolset>\r\n  </PropertyGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\r\n  <ImportGroup 
Label=\"ExtensionSettings\">\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"Shared\">\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Citra|Win32'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='TestClient|Win32'\" Label=\"PropertySheets\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Netlink|Win32'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Citra|x64'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='TestClient|x64'\" Label=\"PropertySheets\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Netlink|x64'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  
</ImportGroup>\r\n  <PropertyGroup Label=\"UserMacros\" />\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Citra|Win32'\">\r\n    <NMakeOutput>\r\n    </NMakeOutput>\r\n    <NMakePreprocessorDefinitions>USE_CITRA;CITRA;_DEBUG;$(NMakePreprocessorDefinitions)</NMakePreprocessorDefinitions>\r\n    <IncludePath>C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;C:\\devkitPro\\libctru\\include;C:\\devkitPro\\libwebp\\include;C:\\devkitPro\\libimgui\\include;include;build;C:\\devkitPro\\portlibs\\armv6k\\include;$(IncludePath)</IncludePath>\r\n    <NMakeBuildCommandLine>make &amp; start make citra</NMakeBuildCommandLine>\r\n    <NMakeReBuildCommandLine>make clean all</NMakeReBuildCommandLine>\r\n    <NMakeCleanCommandLine>make clean</NMakeCleanCommandLine>\r\n    <NMakeIncludeSearchPath>C:\\devkitPro\\libctru\\include;C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;include;C:\\devkitPro\\libwebp\\include;build;C:\\devkitPro\\libimgui\\include;C:\\devkitPro\\portlibs\\armv6k\\include;$(NMakeIncludeSearchPath)</NMakeIncludeSearchPath>\r\n    <OutDir>..\\tmp\\out\\$(Configuration)\\</OutDir>\r\n    <IntDir>..\\tmp\\intermediate\\$(Configuration)\\</IntDir>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='TestClient|Win32'\">\r\n    <NMakeOutput />\r\n    <NMakePreprocessorDefinitions>WIN32;_DEBUG;$(NMakePreprocessorDefinitions)</NMakePreprocessorDefinitions>\r\n    <IncludePath>C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;C:\\devkitPro\\libctru\\include;C:\\devkitPro\\libwebp\\include;C:\\devkitPro\\libimgui\\include;include;build;$(IncludePath)</IncludePath>\r\n    <NMakeBuildCommandLine>\r\n    </NMakeBuildCommandLine>\r\n    <NMakeReBuildCommandLine>\r\n    </NMakeReBuildCommandLine>\r\n    <NMakeCleanCommandLine>\r\n    </NMakeCleanCommandLine>\r\n    
<NMakeIncludeSearchPath>C:\\devkitPro\\libctru\\include;C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;include;C:\\devkitPro\\libwebp\\include;build;C:\\devkitPro\\libimgui\\include;$(NMakeIncludeSearchPath)</NMakeIncludeSearchPath>\r\n    <OutDir>..\\tmp\\out\\$(Configuration)\\</OutDir>\r\n    <IntDir>..\\tmp\\intermediate\\$(Configuration)\\</IntDir>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Netlink|Win32'\">\r\n    <NMakeOutput>\r\n    </NMakeOutput>\r\n    <NMakePreprocessorDefinitions>$(NMakePreprocessorDefinitions)</NMakePreprocessorDefinitions>\r\n    <IncludePath>C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;C:\\devkitPro\\libctru\\include;C:\\devkitPro\\libwebp\\include;C:\\devkitPro\\libimgui\\include;include;build;C:\\devkitPro\\portlibs\\armv6k\\include;$(IncludePath)</IncludePath>\r\n    <NMakeBuildCommandLine>make &amp; start make netlink</NMakeBuildCommandLine>\r\n    <NMakeReBuildCommandLine>make clean all</NMakeReBuildCommandLine>\r\n    <NMakeCleanCommandLine>make clean</NMakeCleanCommandLine>\r\n    <NMakeIncludeSearchPath>C:\\devkitPro\\libctru\\include;C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;C:\\devkitPro\\libwebp\\include;include;build;$(NMakeIncludeSearchPath)</NMakeIncludeSearchPath>\r\n    <OutDir>..\\tmp\\out\\$(Configuration)\\</OutDir>\r\n    <IntDir>..\\tmp\\intermediate\\$(Configuration)\\</IntDir>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Citra-QT|Win32'\">\r\n    <IncludePath>C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;C:\\devkitPro\\libctru\\include;C:\\devkitPro\\libwebp\\include;C:\\devkitPro\\libimgui\\include;include;build;C:\\devkitPro\\portlibs\\armv6k\\include;$(IncludePath)</IncludePath>\r\n    <NMakePreprocessorDefinitions>USE_CITRA;CITRA;_DEBUG;$(NMakePreprocessorDefinitions)</NMakePreprocessorDefinitions>\r\n    
<NMakeIncludeSearchPath>C:\\devkitPro\\libctru\\include;C:\\devkitPro\\devkitARM\\arm-none-eabi\\include;include;C:\\devkitPro\\libwebp\\include;build;C:\\devkitPro\\libimgui\\include;C:\\devkitPro\\portlibs\\armv6k\\include;$(NMakeIncludeSearchPath)</NMakeIncludeSearchPath>\r\n    <NMakeBuildCommandLine>make &amp; start make citra-qt</NMakeBuildCommandLine>\r\n    <NMakeReBuildCommandLine>make clean all</NMakeReBuildCommandLine>\r\n    <NMakeCleanCommandLine>make clean</NMakeCleanCommandLine>\r\n  </PropertyGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Citra|Win32'\">\r\n    <BuildLog>\r\n      <Path />\r\n    </BuildLog>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='TestClient|Win32'\">\r\n    <BuildLog>\r\n      <Path>\r\n      </Path>\r\n    </BuildLog>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Netlink|Win32'\">\r\n    <BuildLog>\r\n      <Path />\r\n    </BuildLog>\r\n  </ItemDefinitionGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\r\n  <ImportGroup Label=\"ExtensionTargets\">\r\n  </ImportGroup>\r\n</Project>"
  },
  {
    "path": "PinBox/PinBox/PinBox.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\r\n  <ItemGroup>\r\n    <Filter Include=\"Source Files\">\r\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\r\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\r\n    </Filter>\r\n    <Filter Include=\"Header Files\">\r\n      <UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>\r\n      <Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>\r\n    </Filter>\r\n    <Filter Include=\"Resource Files\">\r\n      <UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>\r\n      <Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>\r\n    </Filter>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <None Include=\"source\\vshader.v.pica\">\r\n      <Filter>Source Files</Filter>\r\n    </None>\r\n    <None Include=\"Makefile\">\r\n      <Filter>Resource Files</Filter>\r\n    </None>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClCompile Include=\"source\\main.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPGraphics.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPSession.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPUI.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPMessage.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\Mutex.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPSessionManager.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\ConfigManager.cpp\">\r\n      <Filter>Source 
Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPDecoder.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\yuv_rgb.c\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\PPAudio.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\easing.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"source\\lodepng.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClInclude Include=\"include\\PPGraphics.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\PPSession.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\PPUI.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\PPMessage.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\Mutex.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\PPSessionManager.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\ConfigManager.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\PPDecoder.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\yuv_rgb.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\PPAudio.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\constant.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\easing.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\Color.h\">\r\n   
   <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\Anim.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\Logger.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\HubItem.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"include\\lodepng.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n  </ItemGroup>\r\n</Project>"
  },
  {
    "path": "PinBox/PinBox/build-cia.bat",
    "content": "start make clean & make all & make PinBox-strip.elf & make cia"
  },
  {
    "path": "PinBox/PinBox/create-smdh.bat",
    "content": "start smdhtool --create \"PinBox\" \"3DS Client - PC Desktop Streaming\" \"Namkazt\" \"E:\\3ds\\PinBoxStreaming\\PinBox\\PinBox\\assets\\icon.png\" Pinbox.smdh"
  },
  {
    "path": "PinBox/PinBox/include/Anim.h",
    "content": "#ifndef _PP_ANIM_H_\n#define _PP_ANIM_H_\n\n#include <easing.h>\n#include <3ds/services/am.h>\n#include <3ds/ndsp/ndsp.h>\n#include <cstdlib>\n\n#define TIME_MILISECOND 1Ull\n#define TIME_SECOND 1000Ull\n#define TIME_MINUTE 3600000Ull\n#define TIME_HOUR 216000000Ull\n\nclass Anim {\n\tstruct AnimInfo\n\t{\n\t\tu32 idx;\n\t\tu64 startTime;\n\t\tu64 duration;\n\t\tfloat from;\n\t\tfloat to;\n\t\teasing_functions func;\n\t};\n};\n\n\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/Color.h",
    "content": "#ifndef _PP_COLOR_H_\n#define _PP_COLOR_H_\n\n#include <functional>\n\n#define M_MIN(a,b) (((a) < (b)) ? (a) : (b))\n#define M_MAX(a,b) (((a) > (b)) ? (a) : (b))\n\n#define M_MIN3(a,b,c) (((a) < (b)) ? (((a) < (c)) ? (a) : (c)) : (((b) < (c)) ? (b) : (c)))\n#define M_MAX3(a,b,c) (((a) > (b)) ? (((a) > (c)) ? (a) : (c)) : (((b) > (c)) ? (b) : (c)))\n\n//TODO: port this\n// https://github.com/Qix-/color-convert/blob/master/conversions.js\n// and this https://github.com/Qix-/color/blob/master/index.js#L298\n\nstruct Color;\n#define RGB(r,g,b) Color{r, g, b, 255}\n#define RGBA(r,g,b,a) Color{r, g, b, a}\n#define TRANSPARENT Color{0, 0, 0, 0}\n\ntypedef struct Color\n{\n\tColor() : r(255), g(255), b(255), a(255) {};\n\tColor(uint8_t _r, uint8_t _g, uint8_t _b, uint8_t _a) : r(_r), g(_g), b(_b), a(_a) {};\n\tuint8_t r, g, b, a;\n\tfloat h, l, s, v;\n\tu32 toU32() const { return (((a) & 0xFF) << 24) | (((b) & 0xFF) << 16) | (((g) & 0xFF) << 8) | (((r) & 0xFF) << 0); }\n\t// type conversions\n\tinline void rgb2hsl()\n\t{\n\t\tfloat _r = r / 255.f, _g = g / 255.f, _b = b / 255.f;\n\t\tfloat vMin = M_MIN3(r, g, b);\n\t\tfloat vMax = M_MAX3(r, g, b);\n\t\tfloat delta = vMax - vMin;\n\t\tif (vMax == vMin) h = 0;\n\t\telse if (_r == vMax) h = (_g - _b) / delta;\n\t\telse if (_g == vMax) h = 2 + (_b - _r) / delta;\n\t\telse if (_b == vMax) h = 4 + (_r - _g) / delta;\n\t\th = M_MIN(h * 60.f, 36.f);\n\t\tif (h < 0) h += 360.f;\n\t\tl = (vMin + vMax) / 2.0f;\n\t\tif (vMax == vMin) s = 0;\n\t\telse if (l <= 0.5f) s = delta / (vMax + vMin);\n\t\telse s = delta / (2 - vMax - vMin);\n\t}\n\tinline void hsl2rgb()\n\t{\n\t\tfloat _h = h / 360.f, _s = s / 360.f, _l = l / 360.f;\n\t\tfloat t1 = 0, t2 = 0, t3 = 0, val = 0;\n\t\tif (_s == 0) {\n\t\t\tval = _l * 255.f;\n\t\t\tr = val, g = val, b = val;\n\t\t}else\n\t\t{\n\t\t\tif (_l < 0.5f) t2 = _l * (1 + _s);\n\t\t\telse t2 = _l + _s - _l * _s;\n\t\t\tt1 = 2 * _l - t2;\n\t\t\tstd::function<float(int i)> get = 
std::function<float(int i)>([&](int i) {\n\t\t\t\tt3 = _h + 1.f / 3.f * -(i - 1);\n\t\t\t\tif (t3 < 0) t3++;\n\t\t\t\tif (t3 > 1) t3--;\n\t\t\t\tif (6.f * t3 < 1) val = t1 + (t2 - t1) * 6.f * t3;\n\t\t\t\telse if (2.f * t3 < 1.f) val = t2;\n\t\t\t\telse if (3.f * t3 < 2.f) val = t1 + (t2 - t1) * (2.f / 3.f - t3) * 6.f;\n\t\t\t\telse val = t1;\n\t\t\t\treturn val * 255.f;\n\t\t\t});\n\t\t\tr = get(0);\n\t\t\tg = get(1);\n\t\t\tb = get(2);\n\t\t}\n\t}\n\tinline void rgb2hsv()\n\t{\n\t\tfloat rd, gd, bd;\n\t\tfloat _r = r / 255.f, _g = g / 255.f, _b = b / 255.f;\n\t\tv = M_MAX3(_r, _g, _b);\n\t\tfloat diff = v - M_MIN3(_r, _g, _b);\n\t\tstd::function<float(float c)> diffc = std::function<float(float c)>([&](float c) {\n\t\t\treturn (v - c) / 6.f / diff + 1.f / 2.f;\n\t\t});\n\t\tif (diff == 0.f) h = s = 0.f;\n\t\telse\n\t\t{\n\t\t\ts = diff / v;\n\t\t\trd = diffc(_r);\n\t\t\tgd = diffc(_g);\n\t\t\tbd = diffc(_b);\n\t\t\tif (_r == v) h = bd - gd;\n\t\t\telse if (_g == v) h = (1.f / 3.f) + rd - bd;\n\t\t\telse if (_b == v) h = (2.f / 3.f) + gd - rd;\n\t\t\tif (h < 0) h += 1;\n\t\t\telse if (h > 1) h -= 1;\n\t\t}\n\t\th *= 360; s *= 100; v *= 100;\n\t}\n\tinline void hsv2rgb()\n\t{\n\t\tfloat _h = h / 60.f, _s = s / 100.f, _v = v / 100.f;\n\t\tint hi = (int)floor(_h) % 6;\n\t\tfloat f = _h - floor(_h); // fractional part must stay a float; a uint8_t truncates it to 0\n\t\tuint8_t p = 255.f * _v * (1.f - _s);\n\t\tuint8_t q = 255.f * _v * (1.f - (_s * f));\n\t\tuint8_t t = 255.f * _v * (1 - (_s * (1 - f)));\n\t\t_v *= 255.f;\n\t\tswitch (hi) {\n\t\tcase 0: r = _v; g = t; b = p; break;\n\t\tcase 1: r = q; g = _v; b = p; break;\n\t\tcase 2: r = p; g = _v; b = t; break;\n\t\tcase 3: r = p; g = q; b = _v; break;\n\t\tcase 4: r = t; g = p; b = _v; break;\n\t\tcase 5: r = _v; g = p; b = q; break;\n\t\t}\n\t}\n\n#define WARP_HSL(func)  rgb2hsl(); func; hsl2rgb();\n#define WARP_HSV(func)  rgb2hsv(); func; hsv2rgb();\n\n\t// funcs\n\tinline float luminosity()\n\t{\n\t\tfloat lr = ((float)r / 255.0f); lr = lr <= 0.03928f ? 
lr / 12.92f : powf((lr + 0.055f) / 1.055f, 2.4f);\n\t\tfloat lg = ((float)g / 255.0f); lg = lg <= 0.03928f ? lg / 12.92f : powf((lg + 0.055f) / 1.055f, 2.4f);\n\t\tfloat lb = ((float)b / 255.0f); lb = lb <= 0.03928f ? lb / 12.92f : powf((lb + 0.055f) / 1.055f, 2.4f);\n\t\treturn  0.2126 * lr + 0.7152 * lg + 0.0722 * lb;\n\t}\n\tinline float contrast(Color c)\n\t{\n\t\tfloat l1 = luminosity(), l2 = c.luminosity();\n\t\treturn l1 > l2 ? (l1 + 0.05f) / (l2 + 0.05f) : (l2 + 0.05f) / (l1 + 0.05f);\n\t}\n\tinline int level(Color c)\n\t{\n\t\tfloat cRatio = contrast(c);\n\t\tif (cRatio >= 7.1f) return 3;\n\t\treturn (cRatio >= 4.5) ? 2 : 1;\n\t}\n\tinline bool isDark() { return (float)(r * 299 + g * 587 + b * 114) / 1000.0f < 128.0f; }\n\tinline bool isLight() { return !isDark(); }\n\tinline Color negate() { return RGB(255-r, 255-g, 255-b); }\n\tinline void lighten(float ratio) { WARP_HSL(l += l * ratio) }\n\tinline void darken(float ratio) { WARP_HSL(l -= l * ratio) }\n\tinline void saturate(float ratio) { WARP_HSL(s += s * ratio) }\n\tinline void desaturate(float ratio) { WARP_HSL(s -= s * ratio) }\n\tinline Color grayscale() { float v = r * 0.3f + g * 0.59f + b * 0.11f; return RGB(v, v, v); }\n\tinline void rorate(float degrees) { WARP_HSL( h = (int)(h + degrees) % 360; h = h < 0 ? 360 + h : h; ) }\n\tinline Color mix(Color mixin, float weight)\n\t{\n\t\tfloat w = 2.f * weight - 1;\n\t\tfloat _a = (a - mixin.a) / 255.f; // keep the alpha difference as a float in [-1, 1]; a uint8_t would wrap on negative values\n\t\tfloat w1 = (((w * _a == -1) ? w : (w + _a) / (1.f + w * _a)) + 1.f) / 2.f;\n\t\tfloat w2 = 1 - w1;\n\t\treturn RGBA(w1 * r + w2 * mixin.r, w1 * g + w2 * mixin.g, w1 * b + w2 * mixin.b, a * weight + mixin.a * (1 - weight));\n\t}\n};\n\n\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/ConfigManager.h",
    "content": "#pragma once\r\n#ifndef _CONFIG_MANAGER_H_\r\n#define _CONFIG_MANAGER_H_\r\n\r\n#include <3ds.h>\r\n#include \"libconfig.h\"\r\n#include <vector>\r\n#include <string>\r\n\r\n#define FORCE_OVERRIDE_VERSION 2\r\n\r\ntypedef struct ServerConfig {\r\n\tstd::string ip;\r\n\tstd::string port;\r\n\tstd::string name;\r\n};\r\n\r\nclass ConfigManager\r\n{\r\nprivate:\r\n\tconfig_t _config;\r\n\tbool shouldCreateNewConfigFile();\r\n\tvoid createNewConfigFile();\r\n\tvoid loadConfigFile();\r\n\r\npublic:\r\n\r\n\tServerConfig* activateServer;\r\n\r\n\tstd::vector<ServerConfig> servers;\r\n\tint lastUsingServer = -1;\r\n\r\n\tint videoBitRate;\r\n\tint videoGOP;\r\n\tint videoMaxBFrames;\r\n\r\n\tint audioBitRate;\r\n\r\n\tbool waitForSync;\r\npublic:\r\n\tstatic ConfigManager* Get();\r\n\tConfigManager();\r\n\r\n\tvoid InitConfig();\r\n\tvoid Save();\r\n\tvoid Destroy();\r\n};\r\n\r\n\r\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/HubItem.h",
    "content": "#ifndef _PP_HUB_TITEM_H_\n#define _PP_HUB_TITEM_H_\n#include <3ds.h>\n\nenum HubItemType\n{\n\tHUB_SCREEN = 0x0,\n\tHUB_APP,\n\tHUB_MOVIE,\n};\n\nclass HubItem\n{\npublic:\n\tstd::string\t\t\t\tuuid;\n\n\t// app name\n\tstd::string\t\t\t\tname;\n\n\t// thumb image should be 64x64 png image\n\tu8*\t\t\t\t\t\tthumbBuf;\n\tu32\t\t\t\t\t\tthumbSize;\n\n\tHubItemType\t\t\t\ttype;\n};\n\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/Logger.h",
    "content": "#ifndef _PP_LOGGER_H_\n#define _PP_LOGGER_H_\n#include <cstdio>\n\n\nclass Logger\n{\npublic:\n};\n\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/Mutex.h",
    "content": "#pragma once\r\n#include <3ds.h>\r\n#include <3ds/svc.h>\r\n\r\nclass Mutex\r\n{\r\nprivate:\r\n\tLightLock\t\t\t\t\t\t\t_handler;\r\n\tbool\t\t\t\t\t\t\t\t_isLocked = false;\r\n\r\npublic:\r\n\tMutex();\r\n\t~Mutex();\r\n\r\n\tvoid Lock();\r\n\tvoid TryLock();\r\n\tvoid Unlock();\r\n};\r\n\r\n"
  },
  {
    "path": "PinBox/PinBox/include/PPAudio.h",
    "content": "#pragma once\n#ifndef _PP_AUDIO_H_\n#define _PP_AUDIO_H_\n\n#include <3ds.h>\n#include <libavutil/frame.h>\n#include \"Mutex.h\"\n\n#define MAX_AUDIO_BUF 2\n\n\nclass PPAudio\n{\nprivate:\n\tbool\t\t\t\t\t\t_initialized = false;\n\tndspWaveBuf\t\t\t\t\t_waveBuf[MAX_AUDIO_BUF];\n\tint\t\t\t\t\t\t\t_nextBuf = 0;\n\npublic:\n\t~PPAudio();\n\tstatic PPAudio* Get();\n\n\tvoid AudioInit();\n\tvoid AudioExit();\n\n\n\tvoid FillBuffer(u8* buffer, u32 size);\n\n\n};\n\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/PPDecoder.h",
    "content": "#pragma once\n#include <3ds.h>\n#include \"Mutex.h\"\n#include \"yuv_rgb.h\"\n#include <3ds/services/y2r.h>\n//ffmpeg\nextern \"C\" {\n#include <libavutil/imgutils.h>\n#include <libavutil/samplefmt.h>\n#include <libavutil/timestamp.h>\n#include <libavformat/avformat.h>\n#include <libavformat/avio.h>\n#include <libavutil/file.h>\n#include <libavutil/opt.h>\n#include <libswscale/swscale.h>\n#include <libswresample/swresample.h>\n}\n\n#define MEMORY_BUFFER_SIZE 0x1400000\n#define MEMORY_BUFFER_PADDING 0x04\n\ntypedef struct MemoryBuffer {\n\tu8* pBufferAddr;\n\tu32 iCursor;\n\tu32 iSize;\n\tu32 iMaxSize;\n\tMutex* pMutex;\n\n\tvoid write(u8* buf, u32 size)\n\t{\n\t\tif (iSize < size) return;\n\t\tpMutex->Lock();\n\t\tmemcpy(pBufferAddr + iCursor, buf, size);\n\t\tiCursor += size;\n\t\tiSize -= size;\n\t\tpMutex->Unlock();\n\t}\n\n\tint read(u8* buf, u32 size)\n\t{\n\t\tif (iCursor == 0) return -1;\n\t\tpMutex->Lock();\n\t\tint ret = FFMIN(iCursor, size);\n\t\tmemcpy(buf, pBufferAddr, ret);\n\t\tiSize += ret;\n\t\tiCursor -= ret;\n\t\tpMutex->Unlock();\n\t\treturn ret;\n\t}\n}MemoryBuffer;\n\ntypedef struct DecodeState{\n\tY2RU_ConversionParams y2rParams;\n\tHandle endEvent;\n}DecodeState;\n\nclass PPDecoder\n{\nprivate:\n\n\t// video stream\n\t//AVCodecParserContext*\t\tpVideoParser;\n\tAVCodecContext*\t\t\t\tpVideoContext;\n\tAVPacket*\t\t\t\t\tpVideoPacket;\n\tAVFrame*\t\t\t\t\tpVideoFrame;\n\tDecodeState*\t\t\t\tpDecodeState;\n\tu8*\t\t\t\t\t\t\tdecodeVideoStream();\n\tvoid\t\t\t\t\t\tinitY2RImageConverter();\n\tvoid\t\t\t\t\t\tconvertColor();\n\n\t// audio stream\n\tAVCodecContext*\t\t\t\tpAudioContext;\n\tAVPacket*\t\t\t\t\tpAudioPacket;\n\tAVFrame*\t\t\t\t\tpAudioFrame;\n\npublic:\n\tPPDecoder();\n\t~PPDecoder();\n\n\tu32 iFrameWidth;\n\tu32 iFrameHeight;\n\n\tvoid initDecoder();\n\tvoid releaseDecoder();\n\n\tu8* appendVideoBuffer(u8* buffer, u32 size);\n\tvoid decodeAudioStream(u8* buffer, u32 size);\n};"
  },
  {
    "path": "PinBox/PinBox/include/PPGraphics.h",
    "content": "#pragma once\r\n#ifndef _PP_GRAPHICS_H_\r\n#define _PP_GRAPHICS_H_\r\n\r\n#include <3ds.h>\r\n#include <3ds/gfx.h>\r\n#include <citro3d.h>\r\n#include \"Color.h\"\r\n#include <map>\r\n\r\n//=========================================================================================\r\n// Const\r\n//=========================================================================================\r\n\r\n#define CLEAR_COLOR 0x000000FF\r\n\r\n#define DISPLAY_TRANSFER_FLAGS \\\r\n\t(GX_TRANSFER_FLIP_VERT(0) | GX_TRANSFER_OUT_TILED(0) | GX_TRANSFER_RAW_COPY(0) | \\\r\n\tGX_TRANSFER_IN_FORMAT(GX_TRANSFER_FMT_RGBA8) | GX_TRANSFER_OUT_FORMAT(GX_TRANSFER_FMT_RGB8) | \\\r\n\tGX_TRANSFER_SCALING(GX_TRANSFER_SCALE_NO))\r\n\r\n// Used to convert textures to 3DS tiled format\r\n// Note: vertical flip flag set so 0,0 is top left of texture\r\n#define TEXTURE_TRANSFER_FLAGS \\\r\n\t(GX_TRANSFER_FLIP_VERT(1) | GX_TRANSFER_OUT_TILED(1) | GX_TRANSFER_RAW_COPY(0) | \\\r\n\tGX_TRANSFER_IN_FORMAT(GX_TRANSFER_FMT_RGB8) | GX_TRANSFER_OUT_FORMAT(GX_TRANSFER_FMT_RGB8) | \\\r\n\tGX_TRANSFER_SCALING(GX_TRANSFER_SCALE_NO))\r\n\r\n#define TEXTURE_RGBA_TRANSFER_FLAGS \\\r\n\t(GX_TRANSFER_FLIP_VERT(1) | GX_TRANSFER_OUT_TILED(1) | GX_TRANSFER_RAW_COPY(0) | \\\r\n\tGX_TRANSFER_IN_FORMAT(GX_TRANSFER_FMT_RGBA8) | GX_TRANSFER_OUT_FORMAT(GX_TRANSFER_FMT_RGBA8) | \\\r\n\tGX_TRANSFER_SCALING(GX_TRANSFER_SCALE_NO))\r\n\r\n#define TEX_GRAPHIC 0\r\n#define TEX_VIDEO_STREAM_LEFT 1\r\n#define TEX_VIDEO_STREAM_RIGHT 2\r\n#define TEX_FONT_GRAPHIC 3\r\n\r\n\r\n//=========================================================================================\r\n// Type\r\n//=========================================================================================\r\ntypedef struct\r\n{\r\n\tfloat x;\r\n\tfloat y;\r\n}Vector2;\r\n\r\ntypedef struct\r\n{\r\n\tfloat x;\r\n\tfloat y;\r\n\tfloat z;\r\n}Vector3;\r\n\r\ntypedef struct\r\n{\r\n\tVector3 position;\r\n\tu32 color;\r\n}VertexPosCol;\r\n\r\ntypedef 
struct\r\n{\r\n\tVector3 position;\r\n\tVector2 textcoord;\r\n}VertexPosTex;\r\n\r\ntypedef struct \r\n{\r\n\tC3D_Tex tex;\r\n\tu32 width;\r\n\tu32 height;\r\n\tbool initialized = false;\r\n}Sprite;\r\n\r\n//=========================================================================================\r\n// Class\r\n//=========================================================================================\r\nclass PPGraphics\r\n{\r\npublic:\r\n\tColor PrimaryColor = Color{ 76, 175, 80, 255 };\r\n\tColor PrimaryDarkColor = Color{ 0, 150, 136, 255 };\r\n\tColor AccentColor = Color{ 230, 126, 34, 255 };\r\n\tColor AccentDarkColor = Color{ 211, 84, 0, 255 };\r\n\tColor PrimaryTextColor = Color{ 38, 50, 56, 255 };\r\n\tColor AccentTextColor = Color{ 255, 255, 255, 255 };\r\n\r\n\tColor TransBackgroundDark = Color{3, 3, 3, 180};\r\n\r\nprivate:\r\n\tC3D_RenderTarget*\t\t\t\tmRenderTargetTop = nullptr;\r\n\tC3D_Mtx\t\t\t\t\t\t\tmProjectionTop;\r\n\tC3D_RenderTarget*\t\t\t\tmRenderTargetBtm = nullptr;\r\n\tC3D_Mtx\t\t\t\t\t\t\tmProjectionBtm;\r\n\r\n\tDVLB_s*\t\t\t\t\t\t\tmVShaderDVLB;\r\n\tshaderProgram_s\t\t\t\t\tmShaderProgram;\r\n\tint\t\t\t\t\t\t\t\tmULocProjection;\r\n\r\n\t// system font\r\n\tC3D_Tex*\t\t\t\t\t\tmGlyphSheets;\r\n\r\n\t// top screen sprite\r\n\tSprite*\t\t\t\t\t\t\tmTopScreenSprite;\r\n\r\n\t// Temporary memory pool\r\n\tvoid\t\t\t\t\t\t\t*memoryPoolAddr = NULL;\r\n\tu32\t\t\t\t\t\t\t\tmemoryPoolIndex = 0;\r\n\tu32\t\t\t\t\t\t\t\tmemoryPoolSize = 0;\r\n\r\n\tgfxScreen_t\t\t\t\t\t\tmCurrentDrawScreen = GFX_TOP;\r\n\tint\t\t\t\t\t\t\t\tmRendering = 0;\r\n\r\n\t// Texture cache\r\n\tstd::map<std::string, Sprite*>\tmTexCached;\r\n\r\n\tvoid setupForPosCollEnv(void* vertices);\r\n\tvoid setupForPosTexlEnv(void* vertices, u32 color, int texID);\r\n\r\n\tint getTextUnit(GPU_TEXUNIT unit);\r\n\t\r\n\r\n\tvoid *allocMemoryPoolAligned(u32 size, u32 alignment);\r\n\tvoid resetMemoryPool() { memoryPoolIndex = 0; }\r\n\r\npublic:\r\n\t~PPGraphics();\r\n\tstatic 
PPGraphics* Get();\r\n\r\n\tvoid GraphicsInit();\r\n\tvoid GraphicExit();\r\n\r\n\t// cache functions\r\n\tSprite* AddCacheImageAsset(const char* name, std::string key);\r\n\tSprite* AddCacheImage(const char* path, std::string key);\r\n\tSprite* AddCacheImage(u8 *buf, u32 size, std::string key);\r\n\tSprite* GetCacheImage(std::string key);\r\n\r\n\t// draw functions\r\n\tvoid BeginRender();\r\n\tvoid RenderOn(gfxScreen_t screen);\r\n\tvoid EndRender();\r\n\r\n\tvoid UpdateTopScreenSprite(u8* data, u32 size);\r\n\tvoid DrawTopScreenSprite();\r\n\r\n\t// Draw image\r\n\tvoid DrawImage(Sprite* sprite, int x, int y);\r\n\tvoid DrawImage(std::string key, int x, int y) { DrawImage(GetCacheImage(key), x, y); }\r\n\tvoid DrawImage(Sprite* sprite, int x, int y, int w, int h);\r\n\tvoid DrawImage(std::string key, int x, int y, int w, int h) { DrawImage(GetCacheImage(key), x, y, w, h); }\r\n\tvoid DrawImage(Sprite* sprite, int x, int y, int w, int h, int degrees);\r\n\tvoid DrawImage(std::string key, int x, int y, int w, int h, int degrees) { DrawImage(GetCacheImage(key), x, y, w, h, degrees); }\r\n\tvoid DrawImage(Sprite* sprite, int x, int y, int w, int h, int degrees, Vector2 anchor);\r\n\tvoid DrawImage(std::string key, int x, int y, int w, int h, int degrees, Vector2 anchor) { DrawImage(GetCacheImage(key), x, y, w, h, degrees, anchor); }\r\n\r\n\t// draw rectangle\r\n\tvoid DrawRectangle(float x, float y, float w, float h, Color color, float rounding = 0.0f);\r\n\r\n\t// mask\r\n\tvoid StartMasked(float x, float y, float w, float h, gfxScreen_t screen) const;\r\n\tvoid StopMasked() const;\r\n\r\n\t// draw text\r\n\tvoid DrawText(const char* text, float x, float y, float scaleX, float scaleY, Color color, bool baseline);\r\n\tvoid DrawTextAutoWrap(const char* text, float x, float y, float w, float scaleX, float scaleY, Color color, bool baseline);\r\n\tVector2 GetTextSize(const char* text, float scaleX, float scaleY);\r\n\tVector3 GetTextSizeAutoWrap(const char* text, 
float scaleX, float scaleY, float w);\r\n};\r\n\r\n\r\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/PPMessage.h",
    "content": "#pragma once\r\n#ifndef _PP_MESSAGE_H_\r\n#define _PP_MESSAGE_H_\r\n\r\n#include <3ds.h>\r\n#include <cstdint>\r\n#include <cstring>\r\n#include <cstdlib>\r\n#include <cstdio>\r\n\r\n#define WRITE_CHAR_PTR(BUFFER, DATA, SIZE) memcpy(BUFFER, DATA, SIZE); BUFFER += SIZE;\r\n#define WRITE_U8(BUFFER, DATA) *(BUFFER++) = DATA & 0xff;\r\n#define WRITE_U16(BUFFER, DATA) *(BUFFER++) = DATA & 0xff; *(BUFFER++) = (DATA >> 8) & 0xff;\r\n#define WRITE_U32(BUFFER, DATA) *(BUFFER++) = DATA; *(BUFFER++) = DATA >> 8; *(BUFFER++) = DATA >> 16; *(BUFFER++) = DATA >> 24;\r\n#define READ_U8(BUFFER, INDEX) BUFFER[INDEX];\r\n#define READ_U16(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8;\r\n#define READ_U32(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8 | BUFFER[INDEX + 2] << 16 | BUFFER[INDEX + 3] << 24;\r\n#define IS_INVALID_CODE(BUFFER, INDEX) BUFFER[INDEX] != 'P' || BUFFER[INDEX+1] != 'P' || BUFFER[INDEX+2] != 'B' || BUFFER[INDEX+3] != 'X'\r\n\r\nclass PPMessage\r\n{\r\nprivate:\r\n\t//-----------------------------------------------\r\n\t// message header - 9 bytes\r\n\t// [4b] validate code : \"PPBX\"\r\n\t// [1b] message code : 0 to 255 - define message type\r\n\t// [4b] message content size \r\n\t//-----------------------------------------------\r\n\tconst char\t\t\t\t\t\t\tg_validateCode[4] = { 'P','P','B','X' };\r\n\tu8\t\t\t\t\t\t\t\t\tg_code = 0;\r\n\tu32\t\t\t\t\t\t\t\t\tg_contentSize = 0;\r\n\r\n\t//-----------------------------------------------\r\n\t// message content\r\n\t//-----------------------------------------------\r\n\tu8*\t\t\t\t\t\t\t\t\tg_content = nullptr;\r\npublic:\r\n\t~PPMessage();\r\n\tu32\t\t\t\t\t\t\t\t\tGetMessageSize() const { return g_contentSize + 9; }\r\n\tu8*\t\t\t\t\t\t\t\t\tGetMessageContent() { return g_content; }\r\n\tu32\t\t\t\t\t\t\t\t\tGetContentSize() { return g_contentSize; }\r\n\tu8\t\t\t\t\t\t\t\t\tGetMessageCode() { return g_code; }\r\n\t//-----------------------------------------------\r\n\t// NOTE: 
after a message is built, the returned buffer is owned by the caller and must be freed once it has been sent.\r\n\t//-----------------------------------------------\r\n\tu8*\t\t\t\t\t\t\t\t\tBuildMessage(u8* contentBuffer, u32 contentSize);\r\n\tu8*\t\t\t\t\t\t\t\t\tBuildMessageEmpty();\r\n\tvoid\t\t\t\t\t\t\t\tBuildMessageHeader(u8 code);\r\n\r\n\tbool\t\t\t\t\t\t\t\tParseHeader(u8* buffer);\r\n\tvoid\t\t\t\t\t\t\t\tClearHeader();\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/PPSession.h",
    "content": "#pragma once\r\n#ifndef _PP_SESSION_H_\r\n#define _PP_SESSION_H_\r\n\r\n//=======================================================================\r\n// PinBox Video Session\r\n// container streaming data can be movie or screen capture\r\n// With:\r\n// 1, Movie :\r\n// TODO: using ffmpeg to decode movie stream from server\r\n// TODO: support 3D movie on N3DS\r\n// TODO: mainly support RGB565 for lower size and speed up stream\r\n// 2, Screen Capture:\r\n// TODO: capture entire screen or a part of screen that config in server\r\n//-----------------------------------------------------------------------\r\n// Note: each session is running standalone and only 1 session can be run\r\n// at a time.\r\n//=======================================================================\r\n\r\n#include <3ds.h>\r\n#include <map>\r\n#include <string.h>\r\n#include <sys/types.h>\r\n#include <sys/socket.h>\r\n#include <netinet/in.h>\r\n#include <arpa/inet.h>\r\n#include <netdb.h> \r\n#include <fcntl.h>\r\n#include <memory>\r\n#include <cerrno>\r\n#include <functional>\r\n#include <queue>\r\n\r\n#include <constant.h>\r\n#include \"Mutex.h\"\r\n#include \"PPMessage.h\"\r\n#include \"HubItem.h\"\r\n\r\nenum PPSession_Type { PPSESSION_NONE, PPSESSION_MOVIE, PPSESSION_SCREEN_CAPTURE, PPSESSION_INPUT_CAPTURE};\r\n\r\n#define MSG_COMMAND_SIZE 9\r\n\r\n#define PPREQUEST_AUTHEN 50\r\n#define PPREQUEST_HEADER 10\r\n#define PPREQUEST_BODY 15\r\n// authentication code\r\n#define MSG_CODE_REQUEST_AUTHENTICATION_SESSION 1\r\n\r\n#define MSG_CODE_RESULT_AUTHENTICATION_SUCCESS 5\r\n#define MSG_CODE_RESULT_AUTHENTICATION_FAILED 6\r\n// screen capture code\r\n#define MSG_CODE_REQUEST_START_SCREEN_CAPTURE 10\r\n#define MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE 11\r\n#define MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE 12\r\n#define MSG_CODE_REQUEST_NEW_SCREEN_FRAME 15\r\n#define MSG_CODE_REQUEST_SCREEN_RECEIVED_FRAME 16\r\n#define MSG_CODE_REQUEST_NEW_AUDIO_FRAME 18\r\n#define 
MSG_CODE_REQUEST_RECEIVED_AUDIO_FRAME 19\r\n\r\n\r\n// input\r\n#define MSG_CODE_SEND_INPUT_CAPTURE 42\r\n#define MSG_CODE_SEND_INPUT_CAPTURE_IDLE 44\r\n\r\n// hub\r\n#define MSG_CODE_REQUEST_HUB_ITEMS 60\r\n#define MSG_CODE_RECEIVED_HUB_ITEMS 61\r\n\r\n// audio\r\n#define AUDIO_CHANNEL\t0x08\r\n\r\n\r\ntypedef struct\r\n{\r\n\tvoid\t\t*msgBuffer;\r\n\tu32\t\t\tmsgSize;\r\n} QueueMessage;\r\n\r\n\r\nclass PPSessionManager;\r\n\r\n\r\nenum ppConectState { IDLE, CONNECTING, CONNECTED, FAIL };\r\ntypedef std::function<void(u8* buffer, u32 size, u32 tag)> PPNetworkReceivedRequest;\r\ntypedef std::function<void(u8* data, u32 code)> PPNetworkCallback;\r\n\r\nclass PPSession\r\n{\r\nprivate:\r\n\tPPSessionManager\t\t\t\t*_manager;\r\n\tPPMessage*\t\t\t\t\t\t_tmpMessage = nullptr;\r\n\tbool\t\t\t\t\t\t\t_authenticated = false;\r\n\tstd::queue<QueueMessage*>\t\t_sendingMessages;\r\n\tMutex*\t\t\t\t\t\t\t_queueMessageMutex;\r\nprivate:\r\n\t// threading\r\n\tbool\t\t\t\t\t\t\t_running = false;\r\n\tbool\t\t\t\t\t\t\t_kill = false;\r\n\tThread\t\t\t\t\t\t\t_thread;\r\n\t// socket\r\n\tconst char*\t\t\t\t\t\t_ip = 0;\r\n\tconst char*\t\t\t\t\t\t_port = 0;\r\n\tint\t\t\t\t\t\t\t\t_sock = -1;\r\n\tppConectState\t\t\t\t\t_connect_state = IDLE;\r\n\r\n\t// test connection result\r\n\tint\tvolatile\t\t\t\t\t_testConnectionResult = 0;\r\n\r\n\t// hub items\r\n\tstd::vector<HubItem*>\t\t\t_hubItems;\r\n\r\n\tvoid connectToServer();\r\n\tvoid closeConnect();\r\n\tvoid recvSocketData();\r\n\tvoid sendMessageData();\r\n\r\n\tvoid processReceivedMsg(u8* buffer, u32 size, u32 tag);\r\n\tvoid processMessageData(u8* buffer, size_t size);\r\n\r\npublic:\r\n\tPPSession();\r\n\t~PPSession();\r\n\r\n\tint GetTestConnectionResult() const { return _testConnectionResult; }\r\n\tvoid InitTestSession(PPSessionManager* manager, const char* ip, const char* port);\r\n\tvoid threadTest();\r\n\r\n\tvoid InitSession(PPSessionManager* manager, const char* ip, const char* port);\r\n\tvoid 
threadMain();\r\n\tvoid ReleaseSession();\r\n\tvoid CleanUp();\r\n\r\n\tvoid StartStream();\r\n\tvoid StopStream();\r\n\r\n\tvoid RequestForData(u32 size, u32 tag = 0);\r\n\tvoid AddMessageToQueue(u8 *msgBuffer, int32_t msgSize);\r\n\r\n\tint GetHubItemCount() { return _hubItems.size(); }\r\n\tHubItem* GetHubItem(int i) { return _hubItems.at(i); }\r\n\r\nprivate:\r\n\tbool\t\t\t\t\t\t\t\tisInputStarted = false;\r\n\tbool\t\t\t\t\t\t\t\tisSessionStarted = false;\r\n\r\npublic:\r\n\t// Authentication\r\n\tvoid\t\t\t\t\t\t\t\tSendMsgAuthentication();\r\n\r\n\t// Stream\r\n\tvoid\t\t\t\t\t\t\t\tSendMsgStartStream();\r\n\tvoid\t\t\t\t\t\t\t\tSendMsgStopStream();\r\n\r\n\t// Setting\r\n\tvoid\t\t\t\t\t\t\t\tSendMsgChangeSetting();\r\n\r\n\t// Hub \r\n\tvoid\t\t\t\t\t\t\t\tSendMsgRequestHubItems();\r\n\r\n\t// Input\r\n\tbool\t\t\t\t\t\t\t\tSendMsgSendInputData(u32 down, u32 up, short cx, short cy, short ctx, short cty);\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/PPSessionManager.h",
    "content": "#pragma once\r\n#include <vector>\r\n#include \"PPSession.h\"\r\n#include <webp/decode.h>\r\n#include <turbojpeg.h>\r\n#include \"opusfile.h\"\r\n#include <map>\r\n#include <stack>\r\n#include \"PPDecoder.h\"\r\n\r\n#include \"constant.h\"\r\n#include \"ConfigManager.h\"\r\n\r\nenum SessionState\r\n{\r\n\tSS_NOT_CONNECTED = 0,\r\n\tSS_CONNECTING,\r\n\tSS_CONNECTED,\r\n\tSS_PAIRED,\r\n\tSS_STREAMING,\r\n\r\n\tSS_FAILED\r\n};\r\n\r\ntypedef struct\r\n{\r\n\tu8\t\t*buffer;\r\n\tu32\t\t\tsize;\r\n} VideoFrame;\r\n\r\nenum BusyState\r\n{\r\n\tBS_NONE = 0,\r\n\tBS_AUTHENTICATION,\r\n\tBS_HUB_ITEMS\r\n};\r\n\r\nclass PPSessionManager\r\n{\r\nprotected:\r\n\tPPSession*\t\t\t\t\t\t\t\t\t\t_session = nullptr;\r\n\tPPSession*\t\t\t\t\t\t\t\t\t\t_testSession = nullptr;\r\n\r\n\t//--------------------------------------------\r\n\t// decoder\r\n\t//--------------------------------------------\r\n\tPPDecoder*\t\t\t\t\t\t\t\t\t\t_decoder = nullptr;\r\n\r\n\t//--------------------------------------------\r\n\t// input\r\n\t//--------------------------------------------\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\t_oldDown;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\t_oldUp;\r\n\tshort\t\t\t\t\t\t\t\t\t\t\t_oldCX;\r\n\tshort\t\t\t\t\t\t\t\t\t\t\t_oldCY;\r\n\tshort\t\t\t\t\t\t\t\t\t\t\t_oldCTX;\r\n\tshort\t\t\t\t\t\t\t\t\t\t\t_oldCTY;\r\n\tbool\t\t\t\t\t\t\t\t\t\t\t_initInputFirstFrame = false;\r\n\r\n\t//--------------------------------------------\r\n\t// fps\r\n\t//--------------------------------------------\r\n\tu64\t\t\t\t\t\t\t\t\t\t\t\t_lastRenderTime = 0;\r\n\tfloat\t\t\t\t\t\t\t\t\t\t\t_currentRenderFPS = 0.0f;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\t_renderFrames = 0;\r\n\r\n\tu64\t\t\t\t\t\t\t\t\t\t\t\t_lastVideoTime = 0;\r\n\tfloat\t\t\t\t\t\t\t\t\t\t\t_currentVideoFPS = 0.0f;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\t_videoFrame = 0;\r\n\tbool\t\t\t\t\t\t\t\t\t\t\t_receivedFirstVideoFrame = false;\r\n\r\n\r\n\t\r\n\tSessionState\t\t\t\t\t\t\t\t\t_sessionState = 
SS_NOT_CONNECTED;\r\n\tBusyState\t\t\t\t\t\t\t\t\t\t_busyState = BS_NONE;\r\npublic:\r\n\tPPSessionManager();\r\n\t~PPSessionManager();\r\n\r\n\t/**\r\n\t * \\brief Test the connection to a server\r\n\t * \\param config server information\r\n\t * \\return  0 while pending\\n\r\n\t *\t\t\t-1 on failure\\n\r\n\t *\t\t\t1 on success\r\n\t */\r\n\tint TestConnection(ServerConfig *config);\r\n\r\n\t/**\r\n\t * \\brief Connect to the server described by config. Must be called after @ref InitNewSession.\r\n\t * Poll @ref GetSessionState for the connection state:\r\n\t * it returns @ref SS_FAILED if the connection fails,\r\n\t * and @ref SS_PAIRED once connected and authenticated\r\n\t * \\param config server information\r\n\t * \\return @ref SessionState\r\n\t */\r\n\tSessionState ConnectToServer(ServerConfig* config);\r\n\r\n\t/**\r\n\t * \\brief Disconnect from the paired server.\r\n\t * Afterwards @ref GetSessionState will return @ref SS_NOT_CONNECTED\r\n\t */\r\n\tvoid DisconnectToServer();\r\n\r\n\tint GetHubItemCount() { return _session->GetHubItemCount(); }\r\n\tHubItem* GetHubItem(int i) { return _session->GetHubItem(i); }\r\n\r\n\t/**\r\n\t * \\brief Get the current session state\r\n\t * \\return @ref SessionState\r\n\t */\r\n\tSessionState GetSessionState() const { return _sessionState; }\r\n\r\n\t/**\r\n\t * \\brief Set the current session state\r\n\t * \\param v new state to apply to the session\r\n\t */\r\n\tvoid SetSessionState(SessionState v) { _sessionState = v; }\r\n\r\n\t/**\r\n\t * \\brief Initialize the video and audio decoders\r\n\t */\r\n\tvoid InitDecoder();\r\n\r\n\t/**\r\n\t * \\brief Release the video and audio decoders\r\n\t */\r\n\tvoid ReleaseDecoder();\r\n\r\n\t/**\r\n\t * \\brief Send the updated stream settings to the server.\r\n\t * The config must be updated via @ref PPConfigManager before calling this function\r\n\t */\r\n\tvoid UpdateStreamSetting();\r\n\r\n\t/**\r\n\t* \\brief Get the server's controller profiles (this only returns the key name of each profile
)\r\n\t*/\r\n\tvoid GetControllerProfiles();\r\n\r\n\t/**\r\n\t * \\brief Send the authentication message\r\n\t */\r\n\tvoid Authentication();\r\n\r\n\tBusyState GetBusyState() { return _busyState; }\r\n\tvoid SetBusyState(BusyState bs) { _busyState = bs; }\r\n\r\n\t/**\r\n\t * \\brief Tell the server that the client wants to start streaming\r\n\t */\r\n\tvoid StartStreaming();\r\n\r\n\tvoid StopStreaming();\r\n\r\n\tvoid StartFPSCounter();\r\n\tvoid UpdateFPSCounter();\r\n\r\n\tvoid UpdateInputStream(u32 down, u32 up, short cx, short cy, short ctx, short cty);\r\n\tvoid ProcessVideoFrame(u8* buffer, u32 size);\r\n\tvoid ProcessAudioFrame(u8* buffer, u32 size);\r\n\r\n\tvoid DrawVideoFrame();\r\n\r\n\t// fps\r\n\tfloat GetFPS() const { return _currentRenderFPS; }\r\n\tfloat GetVideoFPS() const { return _currentVideoFPS; }\r\n};\r\n"
  },
  {
    "path": "PinBox/PinBox/include/PPUI.h",
    "content": "#pragma once\r\n#ifndef _PP_UI_H_\r\n#define _PP_UI_H_\r\n#include <3ds.h>\r\n#include <string>\r\n#include <citro3d.h>\r\n#include \"PPGraphics.h\"\r\n#include \"PPSessionManager.h\"\r\n#include \"easing.h\"\r\n#include \"Color.h\"\r\n\r\n//#define UI_DEBUG 1\r\n#define RET_CLOSE_APP -1000\r\n\r\n#define SCROLL_THRESHOLD 5\r\n#define SCROLL_BAR_MIN 0.2f\r\n#define SCROLL_SPEED_MODIFIED 0.2f\r\n#define SCROLL_SPEED_STEP 0.18f\r\n\r\nenum Direction\r\n{\r\n\tD_NONE = 0,\r\n\tD_HORIZONTAL = 1,\r\n\tD_VERTICAL = 2,\r\n\tD_BOTH = 3\r\n};\r\n\r\ntypedef struct DialogBoxOverride {\r\n\tbool isActivate = false;\r\n\tColor TitleBgColor;\r\n\tColor TitleTextColor;\r\n\tconst char* Title = nullptr;\r\n\tconst char* Body = nullptr;\r\n};\r\n\r\ntypedef struct WH {\r\n\tint width;\r\n\tint height;\r\n};\r\n\r\ntypedef std::function<void(float x, float y, float w, float h)> TabContentDraw;\r\ntypedef std::function<int()> PopupCallback;\r\ntypedef std::function<WH()> WHCallback;\r\ntypedef std::function<void(void* arg1, void* arg2)> ResultCallback;\r\n\r\nclass PPUI\r\n{\r\n\r\npublic:\r\n\tstatic u32 getKeyDown();\r\n\tstatic u32 getKeyHold();\r\n\tstatic u32 getKeyUp();\r\n\tstatic circlePosition getLeftCircle();\r\n\tstatic circlePosition getRightCircle();\r\n\r\n\tstatic u32 getSleepModeState();\r\n\r\n\tstatic void UpdateInput();\r\n\tstatic bool TouchDownOnArea(float x, float y, float w, float h);\r\n\tstatic bool TouchUpOnArea(float x, float y, float w, float h);\r\n\tstatic bool TouchDown();\r\n\tstatic bool TouchMove();\r\n\tstatic bool TouchUp();\r\n\r\n\r\n\t// RESOURCES\r\n\tstatic void InitResource();\r\n\tstatic void CleanupResource();\r\n\r\n\t// SCREEN\r\n\tstatic int DrawIdleTopScreen(PPSessionManager *sessionManager);\r\n\r\n\r\n\tstatic int DrawBtmServerSelectScreen(PPSessionManager *sessionManager);\r\n\tstatic int DrawBtmAddNewServerProfileScreen(PPSessionManager *sessionManager, ResultCallback cancel, ResultCallback ok);\r\n\tstatic int 
DrawBtmPairedScreen(PPSessionManager *sessionManager);\r\n\r\n\tstatic int DrawStreamConfigUI(PPSessionManager *sessionManager, ResultCallback cancel, ResultCallback ok);\r\n\tstatic int DrawIdleBottomScreen(PPSessionManager *sessionManager);\r\n\r\n\tstatic void InfoBox(PPSessionManager *sessionManager);\r\n\r\n\t// TAB\r\n\tstatic int DrawTabs(const char* tabs[], u32 tabCount, int activeTab, float x, float y, float w, float h);\r\n\r\n\t// DIALOG\r\n\tstatic void OverrideDialogTypeWarning();\r\n\tstatic void OverrideDialogTypeInfo();\r\n\tstatic void OverrideDialogTypeSuccess();\r\n\tstatic void OverrideDialogTypeCritical();\r\n\r\n\tstatic void OverrideDialogContent(const char* title, const char* body);\r\n\r\n\tstatic int DrawDialogKeyboard( ResultCallback cancelCallback, ResultCallback okCallback);\r\n\tstatic int DrawDialogNumberInput( ResultCallback cancelCallback, ResultCallback okCallback);\r\n\tstatic int DrawDialogLoading(const char* title, const char* body, PopupCallback callback);\r\n\tstatic int DrawDialogMessage(PPSessionManager *sessionManager, const char* title, const char* body);\r\n\tstatic int DrawDialogMessage(PPSessionManager *sessionManager, const char* title, const char* body, PopupCallback closeCallback);\r\n\tstatic int DrawDialogMessage(PPSessionManager *sessionManager, const char* title, const char* body, PopupCallback cancelCallback, PopupCallback okCallback);\r\n\r\n\tstatic int DrawDialogBox(PPSessionManager *sessionManager);\r\n\r\n\r\n\t// SCROLL BOX\r\n\tstatic Vector2 ScrollBox(float x, float y, float w, float h, Direction dir, Vector2 cursor, WHCallback contentDraw);\r\n\r\n\t// SLIDE\r\n\tstatic float Slide(float x, float y, float w, float h, float val, float min, float max, float step, const char* label);\r\n\t\r\n\t// CHECKBOX\r\n\tstatic bool ToggleBox(float x, float y, float w, float h, bool value, const char* label);\r\n\r\n\t// SELECT BOX\r\n\tstatic bool SelectBox(float x, float y, float w, float h, Color color, float 
rounding);\r\n\r\n\t// BUTTON\r\n\tstatic bool FlatButton(float x, float y, float w, float h, const char* label, float rounding = 0.0f);\r\n\tstatic bool FlatDarkButton(float x, float y, float w, float h, const char* label, float rounding = 0.0f);\r\n\tstatic bool FlatColorButton(float x, float y, float w, float h, const char* label, Color colNormal, Color colActive, Color txtCol, float rounding = 0.0f);\r\n\r\n\tstatic bool RepeatButton(float x, float y, float w, float h, const char* label, Color colNormal, Color colActive, Color txtCol);\r\n\r\n\t// TEXT\r\n\tstatic int LabelBox(float x, float y, float w, float h, const char* label, Color bgColor, Color txtColor, float scale = 0.5f, float rounding = 0.f);\r\n\tstatic int LabelBoxAutoWrap(float x, float y, float w, float h, const char* label, Color bgColor, Color txtColor, float scale = 0.5f, float rounding = 0.f);\r\n\tstatic int LabelBoxLeft(float x, float y, float w, float h, const char* label, Color bgColor, Color txtColor, float scale = 0.5f, float rounding = 0.f);\r\n\r\n\t// POPUP\r\n\tstatic bool HasPopup();\r\n\tstatic PopupCallback GetPopup();\r\n\tstatic void ClosePopup();\r\n\tstatic void AddPopup(PopupCallback callback);\r\n\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBox/PinBox/include/constant.h",
    "content": "#pragma once\n\n#define SOC_ALIGN       0x1000\n#define SOC_BUFFERSIZE  0x100000\n\n#define CONSOLE_DEBUG 1\n#define USE_CITRA 1"
  },
  {
    "path": "PinBox/PinBox/include/easing.h",
    "content": "#pragma once \n\nenum easing_functions\n{\n\tEaseInSine,\n\tEaseOutSine,\n\tEaseInOutSine,\n\tEaseInQuad,\n\tEaseOutQuad,\n\tEaseInOutQuad,\n\tEaseInCubic,\n\tEaseOutCubic,\n\tEaseInOutCubic,\n\tEaseInQuart,\n\tEaseOutQuart,\n\tEaseInOutQuart,\n\tEaseInQuint,\n\tEaseOutQuint,\n\tEaseInOutQuint,\n\tEaseInExpo,\n\tEaseOutExpo,\n\tEaseInOutExpo,\n\tEaseInCirc,\n\tEaseOutCirc,\n\tEaseInOutCirc,\n\tEaseInBack,\n\tEaseOutBack,\n\tEaseInOutBack,\n\tEaseInElastic,\n\tEaseOutElastic,\n\tEaseInOutElastic,\n\tEaseInBounce,\n\tEaseOutBounce,\n\tEaseInOutBounce\n};\n\ntypedef double(*easingFunction)(double);\n\neasingFunction getEasingFunction( easing_functions function );\n\n"
  },
  {
    "path": "PinBox/PinBox/include/lodepng.h",
    "content": "/*\nLodePNG version 20180819\n\nCopyright (c) 2005-2018 Lode Vandevenne\n\nThis software is provided 'as-is', without any express or implied\nwarranty. In no event will the authors be held liable for any damages\narising from the use of this software.\n\nPermission is granted to anyone to use this software for any purpose,\nincluding commercial applications, and to alter it and redistribute it\nfreely, subject to the following restrictions:\n\n1. The origin of this software must not be misrepresented; you must not\nclaim that you wrote the original software. If you use this software\nin a product, an acknowledgment in the product documentation would be\nappreciated but is not required.\n\n2. Altered source versions must be plainly marked as such, and must not be\nmisrepresented as being the original software.\n\n3. This notice may not be removed or altered from any source\ndistribution.\n*/\n\n#ifndef LODEPNG_H\n#define LODEPNG_H\n\n#include <string.h> /*for size_t*/\n\nextern const char* LODEPNG_VERSION_STRING;\n\n/*\nThe following #defines are used to create code sections. They can be disabled\nto disable code sections, which can give faster compile time and smaller binary.\nThe \"NO_COMPILE\" defines are designed to be used to pass as defines to the\ncompiler command to disable them without modifying this header, e.g.\n-DLODEPNG_NO_COMPILE_ZLIB for gcc.\nIn addition to those below, you can also define LODEPNG_NO_COMPILE_CRC to\nallow implementing a custom lodepng_crc32.\n*/\n/*deflate & zlib. 
If disabled, you must specify alternative zlib functions in\nthe custom_zlib field of the compress and decompress settings*/\n#ifndef LODEPNG_NO_COMPILE_ZLIB\n#define LODEPNG_COMPILE_ZLIB\n#endif\n/*png encoder and png decoder*/\n#ifndef LODEPNG_NO_COMPILE_PNG\n#define LODEPNG_COMPILE_PNG\n#endif\n/*deflate&zlib decoder and png decoder*/\n#ifndef LODEPNG_NO_COMPILE_DECODER\n#define LODEPNG_COMPILE_DECODER\n#endif\n/*deflate&zlib encoder and png encoder*/\n#ifndef LODEPNG_NO_COMPILE_ENCODER\n#define LODEPNG_COMPILE_ENCODER\n#endif\n/*the optional built in harddisk file loading and saving functions*/\n#ifndef LODEPNG_NO_COMPILE_DISK\n#define LODEPNG_COMPILE_DISK\n#endif\n/*support for chunks other than IHDR, IDAT, PLTE, tRNS, IEND: ancillary and unknown chunks*/\n#ifndef LODEPNG_NO_COMPILE_ANCILLARY_CHUNKS\n#define LODEPNG_COMPILE_ANCILLARY_CHUNKS\n#endif\n/*ability to convert error numerical codes to English text string*/\n#ifndef LODEPNG_NO_COMPILE_ERROR_TEXT\n#define LODEPNG_COMPILE_ERROR_TEXT\n#endif\n/*Compile the default allocators (C's free, malloc and realloc). 
If you disable this,\nyou can define the functions lodepng_free, lodepng_malloc and lodepng_realloc in your\nsource files with custom allocators.*/\n#ifndef LODEPNG_NO_COMPILE_ALLOCATORS\n#define LODEPNG_COMPILE_ALLOCATORS\n#endif\n/*compile the C++ version (you can disable the C++ wrapper here even when compiling for C++)*/\n#ifdef __cplusplus\n#ifndef LODEPNG_NO_COMPILE_CPP\n#define LODEPNG_COMPILE_CPP\n#endif\n#endif\n\n#ifdef LODEPNG_COMPILE_CPP\n#include <vector>\n#include <string>\n#endif /*LODEPNG_COMPILE_CPP*/\n\n#ifdef LODEPNG_COMPILE_PNG\n/*The PNG color types (also used for raw).*/\ntypedef enum LodePNGColorType\n{\n\tLCT_GREY = 0, /*greyscale: 1,2,4,8,16 bit*/\n\tLCT_RGB = 2, /*RGB: 8,16 bit*/\n\tLCT_PALETTE = 3, /*palette: 1,2,4,8 bit*/\n\tLCT_GREY_ALPHA = 4, /*greyscale with alpha: 8,16 bit*/\n\tLCT_RGBA = 6 /*RGB with alpha: 8,16 bit*/\n} LodePNGColorType;\n\n#ifdef LODEPNG_COMPILE_DECODER\n/*\nConverts PNG data in memory to raw pixel data.\nout: Output parameter. Pointer to buffer that will contain the raw pixel data.\nAfter decoding, its size is w * h * (bytes per pixel) bytes larger than\ninitially. Bytes per pixel depends on colortype and bitdepth.\nMust be freed after usage with free(*out).\nNote: for 16-bit per channel colors, uses big endian format like PNG does.\nw: Output parameter. Pointer to width of pixel data.\nh: Output parameter. Pointer to height of pixel data.\nin: Memory buffer with the PNG file.\ninsize: size of the in buffer.\ncolortype: the desired color type for the raw output image. See explanation on PNG color types.\nbitdepth: the desired bit depth for the raw output image. 
See explanation on PNG color types.\nReturn value: LodePNG error code (0 means no error).\n*/\nunsigned lodepng_decode_memory(unsigned char** out, unsigned* w, unsigned* h,\n\tconst unsigned char* in, size_t insize,\n\tLodePNGColorType colortype, unsigned bitdepth);\n\n/*Same as lodepng_decode_memory, but always decodes to 32-bit RGBA raw image*/\nunsigned lodepng_decode32(unsigned char** out, unsigned* w, unsigned* h,\n\tconst unsigned char* in, size_t insize);\n\n/*Same as lodepng_decode_memory, but always decodes to 24-bit RGB raw image*/\nunsigned lodepng_decode24(unsigned char** out, unsigned* w, unsigned* h,\n\tconst unsigned char* in, size_t insize);\n\n#ifdef LODEPNG_COMPILE_DISK\n/*\nLoad PNG from disk, from file with given name.\nSame as the other decode functions, but instead takes a filename as input.\n*/\nunsigned lodepng_decode_file(unsigned char** out, unsigned* w, unsigned* h,\n\tconst char* filename,\n\tLodePNGColorType colortype, unsigned bitdepth);\n\n/*Same as lodepng_decode_file, but always decodes to 32-bit RGBA raw image.*/\nunsigned lodepng_decode32_file(unsigned char** out, unsigned* w, unsigned* h,\n\tconst char* filename);\n\n/*Same as lodepng_decode_file, but always decodes to 24-bit RGB raw image.*/\nunsigned lodepng_decode24_file(unsigned char** out, unsigned* w, unsigned* h,\n\tconst char* filename);\n#endif /*LODEPNG_COMPILE_DISK*/\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n\n#ifdef LODEPNG_COMPILE_ENCODER\n/*\nConverts raw pixel data into a PNG image in memory. The colortype and bitdepth\nof the output PNG image cannot be chosen, they are automatically determined\nby the colortype, bitdepth and content of the input pixel data.\nNote: for 16-bit per channel colors, needs big endian format like PNG does.\nout: Output parameter. Pointer to buffer that will contain the PNG image data.\nMust be freed after usage with free(*out).\noutsize: Output parameter. Pointer to the size in bytes of the out buffer.\nimage: The raw pixel data to encode. 
The size of this buffer should be\nw * h * (bytes per pixel), bytes per pixel depends on colortype and bitdepth.\nw: width of the raw pixel data in pixels.\nh: height of the raw pixel data in pixels.\ncolortype: the color type of the raw input image. See explanation on PNG color types.\nbitdepth: the bit depth of the raw input image. See explanation on PNG color types.\nReturn value: LodePNG error code (0 means no error).\n*/\nunsigned lodepng_encode_memory(unsigned char** out, size_t* outsize,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tLodePNGColorType colortype, unsigned bitdepth);\n\n/*Same as lodepng_encode_memory, but always encodes from 32-bit RGBA raw image.*/\nunsigned lodepng_encode32(unsigned char** out, size_t* outsize,\n\tconst unsigned char* image, unsigned w, unsigned h);\n\n/*Same as lodepng_encode_memory, but always encodes from 24-bit RGB raw image.*/\nunsigned lodepng_encode24(unsigned char** out, size_t* outsize,\n\tconst unsigned char* image, unsigned w, unsigned h);\n\n#ifdef LODEPNG_COMPILE_DISK\n/*\nConverts raw pixel data into a PNG file on disk.\nSame as the other encode functions, but instead takes a filename as output.\nNOTE: This overwrites existing files without warning!\n*/\nunsigned lodepng_encode_file(const char* filename,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tLodePNGColorType colortype, unsigned bitdepth);\n\n/*Same as lodepng_encode_file, but always encodes from 32-bit RGBA raw image.*/\nunsigned lodepng_encode32_file(const char* filename,\n\tconst unsigned char* image, unsigned w, unsigned h);\n\n/*Same as lodepng_encode_file, but always encodes from 24-bit RGB raw image.*/\nunsigned lodepng_encode24_file(const char* filename,\n\tconst unsigned char* image, unsigned w, unsigned h);\n#endif /*LODEPNG_COMPILE_DISK*/\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n\n#ifdef LODEPNG_COMPILE_CPP\nnamespace lodepng\n{\n#ifdef LODEPNG_COMPILE_DECODER\n\t/*Same as lodepng_decode_memory, but decodes to an 
std::vector. The colortype\n\tis the format to output the pixels to. Default is RGBA 8-bit per channel.*/\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tconst unsigned char* in, size_t insize,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tconst std::vector<unsigned char>& in,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n#ifdef LODEPNG_COMPILE_DISK\n\t/*\n\tConverts PNG file from disk to raw pixel data in memory.\n\tSame as the other decode functions, but instead takes a filename as input.\n\t*/\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tconst std::string& filename,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n#endif /* LODEPNG_COMPILE_DISK */\n#endif /* LODEPNG_COMPILE_DECODER */\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\t/*Same as lodepng_encode_memory, but encodes to an std::vector. colortype\n\tis that of the raw input data. 
The output PNG color type will be auto chosen.*/\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst unsigned char* in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst std::vector<unsigned char>& in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n#ifdef LODEPNG_COMPILE_DISK\n\t/*\n\tConverts 32-bit RGBA raw pixel data into a PNG file on disk.\n\tSame as the other encode functions, but instead takes a filename as output.\n\tNOTE: This overwrites existing files without warning!\n\t*/\n\tunsigned encode(const std::string& filename,\n\t\tconst unsigned char* in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n\tunsigned encode(const std::string& filename,\n\t\tconst std::vector<unsigned char>& in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype = LCT_RGBA, unsigned bitdepth = 8);\n#endif /* LODEPNG_COMPILE_DISK */\n#endif /* LODEPNG_COMPILE_ENCODER */\n} /* namespace lodepng */\n#endif /*LODEPNG_COMPILE_CPP*/\n#endif /*LODEPNG_COMPILE_PNG*/\n\n#ifdef LODEPNG_COMPILE_ERROR_TEXT\n  /*Returns an English description of the numerical error code.*/\nconst char* lodepng_error_text(unsigned code);\n#endif /*LODEPNG_COMPILE_ERROR_TEXT*/\n\n#ifdef LODEPNG_COMPILE_DECODER\n/*Settings for zlib decompression*/\ntypedef struct LodePNGDecompressSettings LodePNGDecompressSettings;\nstruct LodePNGDecompressSettings\n{\n\t/* Check LodePNGDecoderSettings for more ignorable errors such as ignore_crc */\n\tunsigned ignore_adler32; /*if 1, continue and don't give an error message if the Adler32 checksum is corrupted*/\n\n\t\t\t\t\t\t\t /*use custom zlib decoder instead of built in one (default: null)*/\n\tunsigned(*custom_zlib)(unsigned char**, size_t*,\n\t\tconst unsigned char*, size_t,\n\t\tconst LodePNGDecompressSettings*);\n\t/*use custom deflate decoder instead of built in 
one (default: null)\n\tif custom_zlib is used, custom_deflate is ignored since only the built in\n\tzlib function will call custom_deflate*/\n\tunsigned(*custom_inflate)(unsigned char**, size_t*,\n\t\tconst unsigned char*, size_t,\n\t\tconst LodePNGDecompressSettings*);\n\n\tconst void* custom_context; /*optional custom settings for custom functions*/\n};\n\nextern const LodePNGDecompressSettings lodepng_default_decompress_settings;\nvoid lodepng_decompress_settings_init(LodePNGDecompressSettings* settings);\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\n/*\nSettings for zlib compression. Tweaking these settings tweaks the balance\nbetween speed and compression ratio.\n*/\ntypedef struct LodePNGCompressSettings LodePNGCompressSettings;\nstruct LodePNGCompressSettings /*deflate = compress*/\n{\n\t/*LZ77 related settings*/\n\tunsigned btype; /*the block type for LZ (0, 1, 2 or 3, see zlib standard). Should be 2 for proper compression.*/\n\tunsigned use_lz77; /*whether or not to use LZ77. Should be 1 for proper compression.*/\n\tunsigned windowsize; /*must be a power of two <= 32768. higher compresses more but is slower. Default value: 2048.*/\n\tunsigned minmatch; /*mininum lz77 length. 3 is normally best, 6 can be better for some PNGs. Default: 0*/\n\tunsigned nicematch; /*stop searching if >= this length found. Set to 258 for best compression. Default: 128*/\n\tunsigned lazymatching; /*use lazy matching: better compression but a bit slower. 
Default: true*/\n\n\t\t\t\t\t\t   /*use custom zlib encoder instead of built in one (default: null)*/\n\tunsigned(*custom_zlib)(unsigned char**, size_t*,\n\t\tconst unsigned char*, size_t,\n\t\tconst LodePNGCompressSettings*);\n\t/*use custom deflate encoder instead of built in one (default: null)\n\tif custom_zlib is used, custom_deflate is ignored since only the built in\n\tzlib function will call custom_deflate*/\n\tunsigned(*custom_deflate)(unsigned char**, size_t*,\n\t\tconst unsigned char*, size_t,\n\t\tconst LodePNGCompressSettings*);\n\n\tconst void* custom_context; /*optional custom settings for custom functions*/\n};\n\nextern const LodePNGCompressSettings lodepng_default_compress_settings;\nvoid lodepng_compress_settings_init(LodePNGCompressSettings* settings);\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n#ifdef LODEPNG_COMPILE_PNG\n/*\nColor mode of an image. Contains all information required to decode the pixel\nbits to RGBA colors. This information is the same as used in the PNG file\nformat, and is used both for PNG and raw image data in LodePNG.\n*/\ntypedef struct LodePNGColorMode\n{\n\t/*header (IHDR)*/\n\tLodePNGColorType colortype; /*color type, see PNG standard or documentation further in this header file*/\n\tunsigned bitdepth;  /*bits per sample, see PNG standard or documentation further in this header file*/\n\n\t\t\t\t\t\t/*\n\t\t\t\t\t\tpalette (PLTE and tRNS)\n\n\t\t\t\t\t\tDynamically allocated with the colors of the palette, including alpha.\n\t\t\t\t\t\tWhen encoding a PNG, to store your colors in the palette of the LodePNGColorMode, first use\n\t\t\t\t\t\tlodepng_palette_clear, then for each color use lodepng_palette_add.\n\t\t\t\t\t\tIf you encode an image without alpha with palette, don't forget to put value 255 in each A byte of the palette.\n\n\t\t\t\t\t\tWhen decoding, by default you can ignore this palette, since LodePNG already\n\t\t\t\t\t\tfills the palette colors in the pixels of the raw RGBA output.\n\n\t\t\t\t\t\tThe palette is 
only supported for color type 3.\n\t\t\t\t\t\t*/\n\tunsigned char* palette; /*palette in RGBARGBA... order. When allocated, must be either 0, or have size 1024*/\n\tsize_t palettesize; /*palette size in number of colors (amount of bytes is 4 * palettesize)*/\n\n\t\t\t\t\t\t/*\n\t\t\t\t\t\ttransparent color key (tRNS)\n\n\t\t\t\t\t\tThis color uses the same bit depth as the bitdepth value in this struct, which can be 1-bit to 16-bit.\n\t\t\t\t\t\tFor greyscale PNGs, r, g and b will all 3 be set to the same.\n\n\t\t\t\t\t\tWhen decoding, by default you can ignore this information, since LodePNG sets\n\t\t\t\t\t\tpixels with this key to transparent already in the raw RGBA output.\n\n\t\t\t\t\t\tThe color key is only supported for color types 0 and 2.\n\t\t\t\t\t\t*/\n\tunsigned key_defined; /*is a transparent color key given? 0 = false, 1 = true*/\n\tunsigned key_r;       /*red/greyscale component of color key*/\n\tunsigned key_g;       /*green component of color key*/\n\tunsigned key_b;       /*blue component of color key*/\n} LodePNGColorMode;\n\n/*init, cleanup and copy functions to use with this struct*/\nvoid lodepng_color_mode_init(LodePNGColorMode* info);\nvoid lodepng_color_mode_cleanup(LodePNGColorMode* info);\n/*return value is error code (0 means no error)*/\nunsigned lodepng_color_mode_copy(LodePNGColorMode* dest, const LodePNGColorMode* source);\n\nvoid lodepng_palette_clear(LodePNGColorMode* info);\n/*add 1 color to the palette*/\nunsigned lodepng_palette_add(LodePNGColorMode* info,\n\tunsigned char r, unsigned char g, unsigned char b, unsigned char a);\n\n/*get the total amount of bits per pixel, based on colortype and bitdepth in the struct*/\nunsigned lodepng_get_bpp(const LodePNGColorMode* info);\n/*get the amount of color channels used, based on colortype in the struct.\nIf a palette is used, it counts as 1 channel.*/\nunsigned lodepng_get_channels(const LodePNGColorMode* info);\n/*is it a greyscale type? 
(only colortype 0 or 4)*/\nunsigned lodepng_is_greyscale_type(const LodePNGColorMode* info);\n/*has it got an alpha channel? (only colortype 4 or 6)*/\nunsigned lodepng_is_alpha_type(const LodePNGColorMode* info);\n/*has it got a palette? (only colortype 3)*/\nunsigned lodepng_is_palette_type(const LodePNGColorMode* info);\n/*only returns true if there is a palette and there is a value in the palette with alpha < 255.\nLoops through the palette to check this.*/\nunsigned lodepng_has_palette_alpha(const LodePNGColorMode* info);\n/*\nCheck if the given color info indicates the possibility of having non-opaque pixels in the PNG image.\nReturns true if the image can have translucent or invisible pixels (it may still be opaque if it doesn't use such pixels).\nReturns false if the image can only have opaque pixels.\nIn detail, it returns true only if it's a color type with alpha, or has a palette with non-opaque values,\nor if \"key_defined\" is true.\n*/\nunsigned lodepng_can_have_alpha(const LodePNGColorMode* info);\n/*Returns the byte size of a raw image buffer with given width, height and color mode*/\nsize_t lodepng_get_raw_size(unsigned w, unsigned h, const LodePNGColorMode* color);\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n/*The information of a Time chunk in PNG.*/\ntypedef struct LodePNGTime\n{\n\tunsigned year;    /*2 bytes used (0-65535)*/\n\tunsigned month;   /*1-12*/\n\tunsigned day;     /*1-31*/\n\tunsigned hour;    /*0-23*/\n\tunsigned minute;  /*0-59*/\n\tunsigned second;  /*0-60 (to allow for leap seconds)*/\n} LodePNGTime;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\n/*Information about the PNG image, except pixels, width and height.*/\ntypedef struct LodePNGInfo\n{\n\t/*header (IHDR), palette (PLTE) and transparency (tRNS) chunks*/\n\tunsigned compression_method;/*compression method of the original file. 
Always 0.*/\n\tunsigned filter_method;     /*filter method of the original file*/\n\tunsigned interlace_method;  /*interlace method of the original file: 0=none, 1=Adam7*/\n\tLodePNGColorMode color;     /*color type and bits, palette and transparency of the PNG file*/\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t\tSuggested background color chunk (bKGD)\n\n\t\t\t\t\t\t\t\tThis uses the same color mode and bit depth as the PNG (except no alpha channel),\n\t\t\t\t\t\t\t\twith values truncated to the bit depth in the unsigned integer.\n\n\t\t\t\t\t\t\t\tFor greyscale and palette PNGs, the value is stored in background_r. The values\n\t\t\t\t\t\t\t\tin background_g and background_b are then unused.\n\n\t\t\t\t\t\t\t\tSo when decoding, you may get these in a different color mode than the one you requested\n\t\t\t\t\t\t\t\tfor the raw pixels.\n\n\t\t\t\t\t\t\t\tWhen encoding with auto_convert, you must use the color model defined in info_png.color for\n\t\t\t\t\t\t\t\tthese values. The encoder normally ignores info_png.color when auto_convert is on, but will\n\t\t\t\t\t\t\t\tuse it to interpret these values (and convert copies of them to its chosen color model).\n\n\t\t\t\t\t\t\t\tWhen encoding, avoid setting this to an expensive color, such as a non-grey value\n\t\t\t\t\t\t\t\twhen the image is grey, or the compression will be worse since it will be forced to\n\t\t\t\t\t\t\t\twrite the PNG with a more expensive color mode (when auto_convert is on).\n\n\t\t\t\t\t\t\t\tThe decoder does not use this background color to edit the color of pixels. 
This is a\n\t\t\t\t\t\t\t\tcompletely optional metadata feature.\n\t\t\t\t\t\t\t\t*/\n\tunsigned background_defined; /*is a suggested background color given?*/\n\tunsigned background_r;       /*red/grey/palette component of suggested background color*/\n\tunsigned background_g;       /*green component of suggested background color*/\n\tunsigned background_b;       /*blue component of suggested background color*/\n\n\t\t\t\t\t\t\t\t /*\n\t\t\t\t\t\t\t\t non-international text chunks (tEXt and zTXt)\n\n\t\t\t\t\t\t\t\t The char** arrays each contain num strings. The actual messages are in\n\t\t\t\t\t\t\t\t text_strings, while text_keys are keywords that give a short description of what\n\t\t\t\t\t\t\t\t the actual text represents, e.g. Title, Author, Description, or anything else.\n\n\t\t\t\t\t\t\t\t All the string fields below including keys, names and language tags are null terminated.\n\t\t\t\t\t\t\t\t The PNG specification uses null characters for the keys, names and tags, and forbids null\n\t\t\t\t\t\t\t\t characters to appear in the main text, which is why we can use null termination everywhere here.\n\n\t\t\t\t\t\t\t\t A keyword is minimum 1 character and maximum 79 characters long. It's\n\t\t\t\t\t\t\t\t discouraged to use a single line length longer than 79 characters for texts.\n\n\t\t\t\t\t\t\t\t Don't allocate these text buffers yourself. Use the init/cleanup functions\n\t\t\t\t\t\t\t\t correctly and use lodepng_add_text and lodepng_clear_text.\n\t\t\t\t\t\t\t\t */\n\tsize_t text_num; /*the amount of texts in these char** buffers (there may be more texts in itext)*/\n\tchar** text_keys; /*the keyword of a text chunk (e.g. 
\"Comment\")*/\n\tchar** text_strings; /*the actual text*/\n\n\t\t\t\t\t\t /*\n\t\t\t\t\t\t international text chunks (iTXt)\n\t\t\t\t\t\t Similar to the non-international text chunks, but with additional strings\n\t\t\t\t\t\t \"langtags\" and \"transkeys\".\n\t\t\t\t\t\t */\n\tsize_t itext_num; /*the amount of international texts in this PNG*/\n\tchar** itext_keys; /*the English keyword of the text chunk (e.g. \"Comment\")*/\n\tchar** itext_langtags; /*language tag for this text's language, ISO/IEC 646 string, e.g. ISO 639 language tag*/\n\tchar** itext_transkeys; /*keyword translated to the international language - UTF-8 string*/\n\tchar** itext_strings; /*the actual international text - UTF-8 string*/\n\n\t\t\t\t\t\t  /*time chunk (tIME)*/\n\tunsigned time_defined; /*set to 1 to make the encoder generate a tIME chunk*/\n\tLodePNGTime time;\n\n\t/*phys chunk (pHYs)*/\n\tunsigned phys_defined; /*if 0, there is no pHYs chunk and the values below are undefined; if 1, there is one*/\n\tunsigned phys_x; /*pixels per unit in x direction*/\n\tunsigned phys_y; /*pixels per unit in y direction*/\n\tunsigned phys_unit; /*may be 0 (unknown unit) or 1 (metre)*/\n\n\t\t\t\t\t\t/*\n\t\t\t\t\t\tColor profile related chunks: gAMA, cHRM, sRGB, iCCP\n\n\t\t\t\t\t\tLodePNG does not apply any color conversions on pixels in the encoder or decoder and does not interpret these color\n\t\t\t\t\t\tprofile values. It merely passes on the information. If you wish to use color profiles and convert colors, please\n\t\t\t\t\t\tuse these values with a color management library.\n\n\t\t\t\t\t\tSee the PNG, ICC and sRGB specifications for more information about the meaning of these values.\n\t\t\t\t\t\t*/\n\n\t\t\t\t\t\t/* gAMA chunk: optional, overridden by sRGB or iCCP if those are present. */\n\tunsigned gama_defined; /* Whether a gAMA chunk is present (0 = not present, 1 = present). 
*/\n\tunsigned gama_gamma;   /* Gamma exponent times 100000 */\n\n\t\t\t\t\t\t   /* cHRM chunk: optional, overridden by sRGB or iCCP if those are present. */\n\tunsigned chrm_defined; /* Whether a cHRM chunk is present (0 = not present, 1 = present). */\n\tunsigned chrm_white_x; /* White Point x times 100000 */\n\tunsigned chrm_white_y; /* White Point y times 100000 */\n\tunsigned chrm_red_x;   /* Red x times 100000 */\n\tunsigned chrm_red_y;   /* Red y times 100000 */\n\tunsigned chrm_green_x; /* Green x times 100000 */\n\tunsigned chrm_green_y; /* Green y times 100000 */\n\tunsigned chrm_blue_x;  /* Blue x times 100000 */\n\tunsigned chrm_blue_y;  /* Blue y times 100000 */\n\n\t\t\t\t\t\t   /*\n\t\t\t\t\t\t   sRGB chunk: optional. May not appear at the same time as iCCP.\n\t\t\t\t\t\t   If gAMA is also present gAMA must contain value 45455.\n\t\t\t\t\t\t   If cHRM is also present cHRM must contain respectively 31270,32900,64000,33000,30000,60000,15000,6000.\n\t\t\t\t\t\t   */\n\tunsigned srgb_defined; /* Whether an sRGB chunk is present (0 = not present, 1 = present). */\n\tunsigned srgb_intent;  /* Rendering intent: 0=perceptual, 1=rel. colorimetric, 2=saturation, 3=abs. colorimetric */\n\n\t\t\t\t\t\t   /*\n\t\t\t\t\t\t   iCCP chunk: optional. 
May not appear at the same time as sRGB.\n\n\t\t\t\t\t\t   LodePNG does not parse or use the ICC profile (except its color space header field for an edge case); a\n\t\t\t\t\t\t   separate library (not included in LodePNG) is needed to handle the ICC data format and use it for color\n\t\t\t\t\t\t   management and conversions.\n\n\t\t\t\t\t\t   For encoding, if iCCP is present, gAMA and cHRM are recommended to be added as well with values that match the ICC\n\t\t\t\t\t\t   profile as closely as possible; if you wish to do this you should provide the correct values for gAMA and cHRM and\n\t\t\t\t\t\t   enable their '_defined' flags since LodePNG will not automatically compute them from the ICC profile.\n\n\t\t\t\t\t\t   For encoding, the ICC profile is required by the PNG specification to be an \"RGB\" profile for non-grey\n\t\t\t\t\t\t   PNG color types and a \"GRAY\" profile for grey PNG color types. If you disable auto_convert, you must ensure\n\t\t\t\t\t\t   the ICC profile type matches your requested color type, else the encoder gives an error. If auto_convert is\n\t\t\t\t\t\t   enabled (the default), and the ICC profile is not a good match for the pixel data, this will result in an encoder\n\t\t\t\t\t\t   error if the pixel data has non-grey pixels for a GRAY profile, or a silently less optimal compression of the pixel\n\t\t\t\t\t\t   data if the pixels could be encoded as greyscale but the ICC profile is RGB.\n\n\t\t\t\t\t\t   To avoid this, do not set an ICC profile in the image unless there is a good reason for it, and when doing so\n\t\t\t\t\t\t   make sure you compute it carefully to avoid the above problems.\n\t\t\t\t\t\t   */\n\tunsigned iccp_defined;      /* Whether an iCCP chunk is present (0 = not present, 1 = present). */\n\tchar* iccp_name;            /* Null terminated string with profile name, 1-79 bytes */\n\t\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t\tThe ICC profile in iccp_profile_size bytes.\n\t\t\t\t\t\t\t\tDon't allocate this buffer yourself. 
Use the init/cleanup functions\n\t\t\t\t\t\t\t\tcorrectly and use lodepng_set_icc and lodepng_clear_icc.\n\t\t\t\t\t\t\t\t*/\n\tunsigned char* iccp_profile;\n\tunsigned iccp_profile_size; /* The size of iccp_profile in bytes */\n\n\t\t\t\t\t\t\t\t/* End of color profile related chunks */\n\n\n\t\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t\tunknown chunks: chunks not known by LodePNG, passed on byte for byte.\n\n\t\t\t\t\t\t\t\tThere are 3 buffers, one for each position in the PNG where unknown chunks can appear.\n\t\t\t\t\t\t\t\tEach buffer contains all unknown chunks for that position consecutively.\n\t\t\t\t\t\t\t\tThe 3 positions are:\n\t\t\t\t\t\t\t\t0: between IHDR and PLTE, 1: between PLTE and IDAT, 2: between IDAT and IEND.\n\n\t\t\t\t\t\t\t\tFor encoding, do not store critical chunks or known chunks that are enabled with a \"_defined\" flag\n\t\t\t\t\t\t\t\tabove in here, since the encoder will blindly follow this and could then encode an invalid PNG file\n\t\t\t\t\t\t\t\t(such as one with two IHDR chunks or the disallowed combination of sRGB with iCCP). But do use\n\t\t\t\t\t\t\t\tthis if you wish to store an ancillary chunk that is not supported by LodePNG (such as sPLT or hIST),\n\t\t\t\t\t\t\t\tor any non-standard PNG chunk.\n\n\t\t\t\t\t\t\t\tDo not allocate or traverse this data yourself. 
Use the chunk traversing functions declared\n\t\t\t\t\t\t\t\tlater, such as lodepng_chunk_next and lodepng_chunk_append, to read/write this struct.\n\t\t\t\t\t\t\t\t*/\n\tunsigned char* unknown_chunks_data[3];\n\tsize_t unknown_chunks_size[3]; /*size in bytes of the unknown chunks, given for protection*/\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n} LodePNGInfo;\n\n/*init, cleanup and copy functions to use with this struct*/\nvoid lodepng_info_init(LodePNGInfo* info);\nvoid lodepng_info_cleanup(LodePNGInfo* info);\n/*return value is error code (0 means no error)*/\nunsigned lodepng_info_copy(LodePNGInfo* dest, const LodePNGInfo* source);\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\nunsigned lodepng_add_text(LodePNGInfo* info, const char* key, const char* str); /*push back both texts at once*/\nvoid lodepng_clear_text(LodePNGInfo* info); /*use this to clear the texts again after you filled them in*/\n\nunsigned lodepng_add_itext(LodePNGInfo* info, const char* key, const char* langtag,\n\tconst char* transkey, const char* str); /*push back the 4 texts of 1 chunk at once*/\nvoid lodepng_clear_itext(LodePNGInfo* info); /*use this to clear the itexts again after you filled them in*/\n\n\t\t\t\t\t\t\t\t\t\t\t /*replaces if exists*/\nunsigned lodepng_set_icc(LodePNGInfo* info, const char* name, const unsigned char* profile, unsigned profile_size);\nvoid lodepng_clear_icc(LodePNGInfo* info); /*use this to clear the texts again after you filled them in*/\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\n\t\t\t\t\t\t\t\t\t\t   /*\n\t\t\t\t\t\t\t\t\t\t   Converts raw buffer from one color type to another color type, based on\n\t\t\t\t\t\t\t\t\t\t   LodePNGColorMode structs to describe the input and output color type.\n\t\t\t\t\t\t\t\t\t\t   See the reference manual at the end of this header file to see which color conversions are supported.\n\t\t\t\t\t\t\t\t\t\t   return value = LodePNG error code (0 if all went ok, an error if the conversion isn't 
supported)\n\t\t\t\t\t\t\t\t\t\t   The out buffer must have size (w * h * bpp + 7) / 8, where bpp is the bits per pixel\n\t\t\t\t\t\t\t\t\t\t   of the output color type (lodepng_get_bpp).\n\t\t\t\t\t\t\t\t\t\t   For < 8 bpp images, there should not be padding bits at the end of scanlines.\n\t\t\t\t\t\t\t\t\t\t   For 16-bit per channel colors, uses big endian format like PNG does.\n\t\t\t\t\t\t\t\t\t\t   Return value is LodePNG error code\n\t\t\t\t\t\t\t\t\t\t   */\nunsigned lodepng_convert(unsigned char* out, const unsigned char* in,\n\tconst LodePNGColorMode* mode_out, const LodePNGColorMode* mode_in,\n\tunsigned w, unsigned h);\n\n#ifdef LODEPNG_COMPILE_DECODER\n/*\nSettings for the decoder. This contains settings for the PNG and the Zlib\ndecoder, but not the Info settings from the Info structs.\n*/\ntypedef struct LodePNGDecoderSettings\n{\n\tLodePNGDecompressSettings zlibsettings; /*in here is the setting to ignore Adler32 checksums*/\n\n\t\t\t\t\t\t\t\t\t\t\t/* Check LodePNGDecompressSettings for more ignorable errors such as ignore_adler32 */\n\tunsigned ignore_crc; /*ignore CRC checksums*/\n\tunsigned ignore_critical; /*ignore unknown critical chunks*/\n\tunsigned ignore_end; /*ignore issues at end of file if possible (missing IEND chunk, too large chunk, ...)*/\n\t\t\t\t\t\t /* TODO: make a system involving warnings with levels and a strict mode instead. Other potentially recoverable\n\t\t\t\t\t\t errors: srgb rendering intent value, size of content of ancillary chunks, more than 79 characters for some\n\t\t\t\t\t\t strings, placement/combination rules for ancillary chunks, crc of unknown chunks, allowed characters\n\t\t\t\t\t\t in string keys, etc... */\n\n\tunsigned color_convert; /*whether to convert the PNG to the color type you want. 
Default: yes*/\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tunsigned read_text_chunks; /*if false but remember_unknown_chunks is true, they're stored in the unknown chunks*/\n\t\t\t\t\t\t\t   /*store all bytes from unknown chunks in the LodePNGInfo (off by default, useful for a png editor)*/\n\tunsigned remember_unknown_chunks;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n} LodePNGDecoderSettings;\n\nvoid lodepng_decoder_settings_init(LodePNGDecoderSettings* settings);\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\n/*automatically use color type with less bits per pixel if losslessly possible. Default: AUTO*/\ntypedef enum LodePNGFilterStrategy\n{\n\t/*every filter at zero*/\n\tLFS_ZERO,\n\t/*Use filter that gives minimum sum, as described in the official PNG filter heuristic.*/\n\tLFS_MINSUM,\n\t/*Use the filter type that gives smallest Shannon entropy for this scanline. Depending\n\ton the image, this is better or worse than minsum.*/\n\tLFS_ENTROPY,\n\t/*\n\tBrute-force-search PNG filters by compressing each filter for each scanline.\n\tExperimental, very slow, and only rarely gives better compression than MINSUM.\n\t*/\n\tLFS_BRUTE_FORCE,\n\t/*use predefined_filters buffer: you specify the filter type for each scanline*/\n\tLFS_PREDEFINED\n} LodePNGFilterStrategy;\n\n/*Gives characteristics about the integer RGBA colors of the image (count, alpha channel usage, bit depth, ...),\nwhich helps decide which color model to use for encoding.\nUsed internally by default if \"auto_convert\" is enabled. 
Public because it's useful for custom algorithms.\nNOTE: This is not related to the ICC color profile, search \"iccp_profile\" instead to find the ICC/chromaticity/...\nfields in this header file.*/\ntypedef struct LodePNGColorProfile\n{\n\tunsigned colored; /*not greyscale*/\n\tunsigned key; /*image is not opaque and color key is possible instead of full alpha*/\n\tunsigned short key_r; /*key values, always as 16-bit, in 8-bit case the byte is duplicated, e.g. 65535 means 255*/\n\tunsigned short key_g;\n\tunsigned short key_b;\n\tunsigned alpha; /*image is not opaque and alpha channel or alpha palette required*/\n\tunsigned numcolors; /*amount of colors, up to 257. Not valid if bits == 16.*/\n\tunsigned char palette[1024]; /*Remembers up to the first 256 RGBA colors, in no particular order*/\n\tunsigned bits; /*bits per channel (not for palette). 1,2 or 4 for greyscale only. 16 if 16-bit per channel required.*/\n\tsize_t numpixels;\n} LodePNGColorProfile;\n\nvoid lodepng_color_profile_init(LodePNGColorProfile* profile);\n\n/*Get a LodePNGColorProfile of the image. The profile must already have been initialized.\nNOTE: This is not related to the ICC color profile, search \"iccp_profile\" instead to find the ICC/chromaticity/...\nfields in this header file.*/\nunsigned lodepng_get_color_profile(LodePNGColorProfile* profile,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tconst LodePNGColorMode* mode_in);\n/*The function LodePNG uses internally to decide the PNG color with auto_convert.\nChooses an optimal color model, e.g. 
grey if only grey pixels, palette if < 256 colors, ...*/\nunsigned lodepng_auto_choose_color(LodePNGColorMode* mode_out,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tconst LodePNGColorMode* mode_in);\n\n/*Settings for the encoder.*/\ntypedef struct LodePNGEncoderSettings\n{\n\tLodePNGCompressSettings zlibsettings; /*settings for the zlib encoder, such as window size, ...*/\n\n\tunsigned auto_convert; /*automatically choose output PNG color type. Default: true*/\n\n\t\t\t\t\t\t   /*If true, follows the official PNG heuristic: if the PNG uses a palette or lower than\n\t\t\t\t\t\t   8 bit depth, set all filters to zero. Otherwise use the filter_strategy. Note that to\n\t\t\t\t\t\t   completely follow the official PNG heuristic, filter_palette_zero must be true and\n\t\t\t\t\t\t   filter_strategy must be LFS_MINSUM*/\n\tunsigned filter_palette_zero;\n\t/*Which filter strategy to use when not using zeroes due to filter_palette_zero.\n\tSet filter_palette_zero to 0 to ensure always using your chosen strategy. Default: LFS_MINSUM*/\n\tLodePNGFilterStrategy filter_strategy;\n\t/*used if filter_strategy is LFS_PREDEFINED. In that case, this must point to a buffer with\n\tthe same length as the amount of scanlines in the image, and each value must be smaller than 5. You\n\thave to clean up this buffer yourself; LodePNG will never free it. 
Don't forget that filter_palette_zero\n\tmust be set to 0 to ensure this is also used on palette or low bitdepth images.*/\n\tconst unsigned char* predefined_filters;\n\n\t/*force creating a PLTE chunk if colortype is 2 or 6 (= a suggested palette).\n\tIf colortype is 3, PLTE is _always_ created.*/\n\tunsigned force_palette;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t/*add LodePNG identifier and version as a text chunk, for debugging*/\n\tunsigned add_id;\n\t/*encode text chunks as zTXt chunks instead of tEXt chunks, and use compression in iTXt chunks*/\n\tunsigned text_compression;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n} LodePNGEncoderSettings;\n\nvoid lodepng_encoder_settings_init(LodePNGEncoderSettings* settings);\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n\n#if defined(LODEPNG_COMPILE_DECODER) || defined(LODEPNG_COMPILE_ENCODER)\n/*The settings, state and information for extended encoding and decoding.*/\ntypedef struct LodePNGState\n{\n#ifdef LODEPNG_COMPILE_DECODER\n\tLodePNGDecoderSettings decoder; /*the decoding settings*/\n#endif /*LODEPNG_COMPILE_DECODER*/\n#ifdef LODEPNG_COMPILE_ENCODER\n\tLodePNGEncoderSettings encoder; /*the encoding settings*/\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\tLodePNGColorMode info_raw; /*specifies the format in which you would like to get the raw pixel buffer*/\n\tLodePNGInfo info_png; /*info of the PNG image obtained after decoding*/\n\tunsigned error;\n#ifdef LODEPNG_COMPILE_CPP\n\t/* For the lodepng::State subclass. 
*/\n\tvirtual ~LodePNGState() {}\n#endif\n} LodePNGState;\n\n/*init, cleanup and copy functions to use with this struct*/\nvoid lodepng_state_init(LodePNGState* state);\nvoid lodepng_state_cleanup(LodePNGState* state);\nvoid lodepng_state_copy(LodePNGState* dest, const LodePNGState* source);\n#endif /* defined(LODEPNG_COMPILE_DECODER) || defined(LODEPNG_COMPILE_ENCODER) */\n\n#ifdef LODEPNG_COMPILE_DECODER\n/*\nSame as lodepng_decode_memory, but uses a LodePNGState to allow custom settings and\ngetting much more information about the PNG image and color mode.\n*/\nunsigned lodepng_decode(unsigned char** out, unsigned* w, unsigned* h,\n\tLodePNGState* state,\n\tconst unsigned char* in, size_t insize);\n\n/*\nRead the PNG header, but not the actual data. This returns only the information\nthat is in the IHDR chunk of the PNG, such as width, height and color type. The\ninformation is placed in the info_png field of the LodePNGState.\n*/\nunsigned lodepng_inspect(unsigned* w, unsigned* h,\n\tLodePNGState* state,\n\tconst unsigned char* in, size_t insize);\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n\n#ifdef LODEPNG_COMPILE_ENCODER\n/*This function allocates the out buffer with standard malloc and stores the size in *outsize.*/\nunsigned lodepng_encode(unsigned char** out, size_t* outsize,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tLodePNGState* state);\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n/*\nThe lodepng_chunk functions are normally not needed, except to traverse the\nunknown chunks stored in the LodePNGInfo struct, or add new ones to it.\nIt also allows traversing the chunks of an encoded PNG file yourself.\n\nThe chunk pointer always points to the beginning of the chunk itself, that is\nthe first byte of the 4 length bytes.\n\nIn the PNG file format, chunks have the following format:\n-4 bytes length: length of the data of the chunk in bytes (chunk itself is 12 bytes longer)\n-4 bytes chunk name (ASCII a-z,A-Z only, see below)\n-length bytes of data (may 
be 0 bytes if length was 0)\n-4 bytes of CRC, computed on chunk name + data\n\nThe first chunk starts at the 8th byte of the PNG file, the entire rest of the file\nconsists of concatenated chunks with the above format.\n\nPNG standard chunk ASCII naming conventions:\n-First byte: uppercase = critical, lowercase = ancillary\n-Second byte: uppercase = public, lowercase = private\n-Third byte: must be uppercase\n-Fourth byte: uppercase = unsafe to copy, lowercase = safe to copy\n*/\n\n/*\nGets the length of the data of the chunk. The total chunk length is 12 bytes more.\nThere must be at least 4 bytes to read from. If the result value is too large,\nit may be corrupt data.\n*/\nunsigned lodepng_chunk_length(const unsigned char* chunk);\n\n/*puts the 4-byte type in a null terminated string*/\nvoid lodepng_chunk_type(char type[5], const unsigned char* chunk);\n\n/*check if the type is the given type*/\nunsigned char lodepng_chunk_type_equals(const unsigned char* chunk, const char* type);\n\n/*0: it's one of the critical chunk types, 1: it's an ancillary chunk (see PNG standard)*/\nunsigned char lodepng_chunk_ancillary(const unsigned char* chunk);\n\n/*0: public, 1: private (see PNG standard)*/\nunsigned char lodepng_chunk_private(const unsigned char* chunk);\n\n/*0: the chunk is unsafe to copy, 1: the chunk is safe to copy (see PNG standard)*/\nunsigned char lodepng_chunk_safetocopy(const unsigned char* chunk);\n\n/*get pointer to the data of the chunk, where the input points to the header of the chunk*/\nunsigned char* lodepng_chunk_data(unsigned char* chunk);\nconst unsigned char* lodepng_chunk_data_const(const unsigned char* chunk);\n\n/*returns 0 if the crc is correct, 1 if it's incorrect (0 for OK as usual!)*/\nunsigned lodepng_chunk_check_crc(const unsigned char* chunk);\n\n/*generates the correct CRC from the data and puts it in the last 4 bytes of the chunk*/\nvoid lodepng_chunk_generate_crc(unsigned char* chunk);\n\n/*\nIterate to the next chunk; allows iterating 
through all chunks of the PNG file. Expects at least 4 readable\nbytes of memory in the input pointer. Will output a pointer to the start of the next chunk or the end of the\nfile if there is no more chunk after this. Start this process at the 8th byte of the PNG file. In a non-corrupt\nPNG file, the last chunk should have name \"IEND\".\n*/\nunsigned char* lodepng_chunk_next(unsigned char* chunk);\nconst unsigned char* lodepng_chunk_next_const(const unsigned char* chunk);\n\n/*\nAppends chunk to the data in out. The given chunk should already have its chunk header.\nThe out variable and outlength are updated to reflect the new reallocated buffer.\nReturns error code (0 if it went ok)\n*/\nunsigned lodepng_chunk_append(unsigned char** out, size_t* outlength, const unsigned char* chunk);\n\n/*\nAppends new chunk to out. The chunk to append is specified by giving its length, type\nand data separately. The type is a 4-letter string.\nThe out variable and outlength are updated to reflect the new reallocated buffer.\nReturns error code (0 if it went ok)\n*/\nunsigned lodepng_chunk_create(unsigned char** out, size_t* outlength, unsigned length,\n\tconst char* type, const unsigned char* data);\n\n\n/*Calculate CRC32 of buffer*/\nunsigned lodepng_crc32(const unsigned char* buf, size_t len);\n#endif /*LODEPNG_COMPILE_PNG*/\n\n\n#ifdef LODEPNG_COMPILE_ZLIB\n/*\nThis zlib part can be used independently to zlib compress and decompress a\nbuffer. It cannot be used to create gzip files however, and it only supports the\npart of zlib that is required for PNG, it does not support dictionaries.\n*/\n\n#ifdef LODEPNG_COMPILE_DECODER\n/*Inflate a buffer. Inflate is the decompression step of deflate. Out buffer must be freed after use.*/\nunsigned lodepng_inflate(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGDecompressSettings* settings);\n\n/*\nDecompresses Zlib data. Reallocates the out buffer and appends the data. 
The\ndata must be according to the zlib specification.\nEither, *out must be NULL and *outsize must be 0, or, *out must be a valid\nbuffer and *outsize its size in bytes. out must be freed by user after usage.\n*/\nunsigned lodepng_zlib_decompress(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGDecompressSettings* settings);\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\n/*\nCompresses data with Zlib. Reallocates the out buffer and appends the data.\nZlib adds a small header and trailer around the deflate data.\nThe data is output in the format of the zlib specification.\nEither, *out must be NULL and *outsize must be 0, or, *out must be a valid\nbuffer and *outsize its size in bytes. out must be freed by user after usage.\n*/\nunsigned lodepng_zlib_compress(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGCompressSettings* settings);\n\n/*\nFind length-limited Huffman code for given frequencies. This function is in the\npublic interface only for tests, it's used internally by lodepng_deflate.\n*/\nunsigned lodepng_huffman_code_lengths(unsigned* lengths, const unsigned* frequencies,\n\tsize_t numcodes, unsigned maxbitlen);\n\n/*Compress a buffer with deflate. See RFC 1951. Out buffer must be freed after use.*/\nunsigned lodepng_deflate(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGCompressSettings* settings);\n\n#endif /*LODEPNG_COMPILE_ENCODER*/\n#endif /*LODEPNG_COMPILE_ZLIB*/\n\n#ifdef LODEPNG_COMPILE_DISK\n/*\nLoad a file from disk into buffer. 
The function allocates the out buffer, and\nafter usage you should free it.\nout: output parameter, contains pointer to loaded buffer.\noutsize: output parameter, size of the allocated out buffer\nfilename: the path to the file to load\nreturn value: error code (0 means ok)\n*/\nunsigned lodepng_load_file(unsigned char** out, size_t* outsize, const char* filename);\n\n/*\nSave a file from buffer to disk. Warning, if it exists, this function overwrites\nthe file without warning!\nbuffer: the buffer to write\nbuffersize: size of the buffer to write\nfilename: the path to the file to save to\nreturn value: error code (0 means ok)\n*/\nunsigned lodepng_save_file(const unsigned char* buffer, size_t buffersize, const char* filename);\n#endif /*LODEPNG_COMPILE_DISK*/\n\n#ifdef LODEPNG_COMPILE_CPP\n/* The LodePNG C++ wrapper uses std::vectors instead of manually allocated memory buffers. */\nnamespace lodepng\n{\n#ifdef LODEPNG_COMPILE_PNG\n\tclass State : public LodePNGState\n\t{\n\tpublic:\n\t\tState();\n\t\tState(const State& other);\n\t\tvirtual ~State();\n\t\tState& operator=(const State& other);\n\t};\n\n#ifdef LODEPNG_COMPILE_DECODER\n\t/* Same as other lodepng::decode, but using a State for more settings and information. */\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tState& state,\n\t\tconst unsigned char* in, size_t insize);\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tState& state,\n\t\tconst std::vector<unsigned char>& in);\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\t/* Same as other lodepng::encode, but using a State for more settings and information. 
*/\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst unsigned char* in, unsigned w, unsigned h,\n\t\tState& state);\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst std::vector<unsigned char>& in, unsigned w, unsigned h,\n\t\tState& state);\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n#ifdef LODEPNG_COMPILE_DISK\n\t/*\n\tLoad a file from disk into an std::vector.\n\treturn value: error code (0 means ok)\n\t*/\n\tunsigned load_file(std::vector<unsigned char>& buffer, const std::string& filename);\n\n\t/*\n\tSave the binary data in an std::vector to a file on disk. The file is overwritten\n\twithout warning.\n\t*/\n\tunsigned save_file(const std::vector<unsigned char>& buffer, const std::string& filename);\n#endif /* LODEPNG_COMPILE_DISK */\n#endif /* LODEPNG_COMPILE_PNG */\n\n#ifdef LODEPNG_COMPILE_ZLIB\n#ifdef LODEPNG_COMPILE_DECODER\n\t/* Zlib-decompress an unsigned char buffer */\n\tunsigned decompress(std::vector<unsigned char>& out, const unsigned char* in, size_t insize,\n\t\tconst LodePNGDecompressSettings& settings = lodepng_default_decompress_settings);\n\n\t/* Zlib-decompress an std::vector */\n\tunsigned decompress(std::vector<unsigned char>& out, const std::vector<unsigned char>& in,\n\t\tconst LodePNGDecompressSettings& settings = lodepng_default_decompress_settings);\n#endif /* LODEPNG_COMPILE_DECODER */\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\t/* Zlib-compress an unsigned char buffer */\n\tunsigned compress(std::vector<unsigned char>& out, const unsigned char* in, size_t insize,\n\t\tconst LodePNGCompressSettings& settings = lodepng_default_compress_settings);\n\n\t/* Zlib-compress an std::vector */\n\tunsigned compress(std::vector<unsigned char>& out, const std::vector<unsigned char>& in,\n\t\tconst LodePNGCompressSettings& settings = lodepng_default_compress_settings);\n#endif /* LODEPNG_COMPILE_ENCODER */\n#endif /* LODEPNG_COMPILE_ZLIB */\n} /* namespace lodepng */\n#endif /*LODEPNG_COMPILE_CPP*/\n\n  /*\n  TODO:\n  [.] 
test if there are no memory leaks or security exploits - done a lot but needs to be checked often\n  [.] check compatibility with various compilers  - done but needs to be redone for every newer version\n  [X] converting color to 16-bit per channel types\n  [X] support color profile chunk types (but never let them touch RGB values by default)\n  [ ] support all public PNG chunk types\n  [ ] make sure encoder generates no chunks with size > (2^31)-1\n  [ ] partial decoding (stream processing)\n  [X] let the \"isFullyOpaque\" function check color keys and transparent palettes too\n  [X] better name for the variables \"codes\", \"codesD\", \"codelengthcodes\", \"clcl\" and \"lldl\"\n  [ ] don't stop decoding on errors like 69, 57, 58 (make warnings)\n  [ ] make warnings like: oob palette, checksum fail, data after iend, wrong/unknown crit chunk, no null terminator in text, ...\n  [ ] errors with line numbers (and version)\n  [ ] let the C++ wrapper catch exceptions coming from the standard library and return LodePNG error codes\n  [ ] allow user to provide custom color conversion functions, e.g. for premultiplied alpha, padding bits or not, ...\n  [ ] allow user to give data (void*) to custom allocator\n  */\n\n#endif /*LODEPNG_H inclusion guard*/\n\n  /*\n  LodePNG Documentation\n  ---------------------\n\n  0. table of contents\n  --------------------\n\n  1. about\n  1.1. supported features\n  1.2. features not supported\n  2. C and C++ version\n  3. security\n  4. decoding\n  5. encoding\n  6. color conversions\n  6.1. PNG color types\n  6.2. color conversions\n  6.3. padding bits\n  6.4. A note about 16-bits per channel and endianness\n  7. error values\n  8. chunks and PNG editing\n  9. compiler support\n  10. examples\n  10.1. decoder C++ example\n  10.2. decoder C example\n  11. state settings reference\n  12. changes\n  13. contact information\n\n\n  1. 
about
  --------

  PNG is a file format to store raster images losslessly with good compression,
  supporting different color types and an alpha channel.

  LodePNG is a PNG codec according to the Portable Network Graphics (PNG)
  Specification (Second Edition) - W3C Recommendation 10 November 2003.

  The specifications used are:

  *) Portable Network Graphics (PNG) Specification (Second Edition):
  http://www.w3.org/TR/2003/REC-PNG-20031110
  *) RFC 1950 ZLIB Compressed Data Format version 3.3:
  http://www.gzip.org/zlib/rfc-zlib.html
  *) RFC 1951 DEFLATE Compressed Data Format Specification ver 1.3:
  http://www.gzip.org/zlib/rfc-deflate.html

  The most recent version of LodePNG can currently be found at
  http://lodev.org/lodepng/

  LodePNG works both in C (ISO C90) and C++, with a C++ wrapper that adds
  extra functionality.

  LodePNG consists of two files:
  -lodepng.h: the header file for both C and C++
  -lodepng.c(pp): give it the name lodepng.c or lodepng.cpp (or .cc) depending on your usage

  If you want to start using LodePNG right away without reading this doc, get the
  examples from the LodePNG website to see how to use it in code, or check the
  smaller examples in chapter 10 here.

  LodePNG is simple but only supports the basic requirements. To achieve
  simplicity, the following design choices were made: There are no dependencies
  on any external library. There are functions to decode and encode a PNG with
  a single function call, and extended versions of these functions taking a
  LodePNGState struct that allow you to specify or get more information. By default
  the colors of the raw image are always RGB or RGBA, no matter what color type
  the PNG file uses. To read and write files, there are simple functions to
  convert the files to/from buffers in memory.

  This all makes LodePNG suitable for loading textures in games, demos and small
  programs, ...
It's less suitable for full-fledged image editors, loading PNGs
  over a network (it requires all the image data to be available before decoding can
  begin), life-critical systems, ...

  1.1. supported features
  -----------------------

  The following features are supported:

  *) decoding of PNGs with any color type, bit depth and interlace mode, to a 24- or 32-bit color raw image,
  or the same color type as the PNG
  *) encoding of PNGs, from any raw image to 24- or 32-bit color, or the same color type as the raw image
  *) Adam7 interlace and deinterlace for any color type
  *) loading the image from hard disk, or decoding it from a buffer from sources other than the hard disk
  *) support for alpha channels, including RGBA color model, translucent palettes and color keying
  *) zlib decompression (inflate)
  *) zlib compression (deflate)
  *) CRC32 and ADLER32 checksums
  *) handling of unknown chunks, which makes it possible to write a PNG editor that stores custom and unknown chunks.
  *) the following chunks are supported (generated/interpreted) by both encoder and decoder:
  IHDR: header information
  PLTE: color palette
  IDAT: pixel data
  IEND: the final chunk
  tRNS: transparency for palettized images
  tEXt: textual information
  zTXt: compressed textual information
  iTXt: international textual information
  bKGD: suggested background color
  pHYs: physical dimensions
  tIME: modification time

  1.2. features not supported
  ---------------------------

  The following features are _not_ supported:

  *) some features needed to make a conformant PNG-Editor might still be missing.
  *) partial loading/stream processing.
All data must be available and is processed in one call.
  *) The following public chunks are not supported but treated as unknown chunks by LodePNG:
  cHRM, gAMA, iCCP, sRGB, sBIT, hIST, sPLT
  Some of these are not supported on purpose: LodePNG wants to provide the RGB values
  stored in the pixels, not values modified by system-dependent gamma or color models.


  2. C and C++ version
  --------------------

  The C version uses buffers allocated with malloc() that you need to free()
  yourself. You need to use init and cleanup functions for each struct whenever
  using a struct from the C version to avoid exploits and memory leaks.

  The C++ version has extra functions with std::vectors in the interface and the
  lodepng::State class which is a LodePNGState with constructor and destructor.

  These files work without modification for both C and C++ compilers because all
  the additional C++ code is in "#ifdef __cplusplus" blocks that make C-compilers
  ignore it, and the C code is made to compile both with strict ISO C90 and C++.

  To use the C++ version, you need to rename the source file to lodepng.cpp
  (instead of lodepng.c), and compile it with a C++ compiler.

  To use the C version, you need to rename the source file to lodepng.c (instead
  of lodepng.cpp), and compile it with a C compiler.


  3. Security
  -----------

  Even if carefully designed, it's always possible that LodePNG contains
  exploits. If you discover one, please let me know, and it will be fixed.

  When using LodePNG, care has to be taken with the C version of LodePNG, as well
  as the C-style structs when working with C++.
The following conventions are used
  for all C-style structs:

  -if a struct has a corresponding init function, always call the init function when making a new one
  -if a struct has a corresponding cleanup function, call it before the struct disappears to avoid memory leaks
  -if a struct has a corresponding copy function, use the copy function instead of "=".
  The destination must also be inited already.


  4. Decoding
  -----------

  Decoding converts a PNG compressed image to a raw pixel buffer.

  Most documentation on using the decoder is at its declarations in the header
  above. For C, simple decoding can be done with functions such as
  lodepng_decode32, and more advanced decoding can be done with the struct
  LodePNGState and lodepng_decode. For C++, all decoding can be done with the
  various lodepng::decode functions, and lodepng::State can be used for advanced
  features.

  When using the LodePNGState, it uses the following fields for decoding:
  *) LodePNGInfo info_png: it stores extra information about the PNG (the input) in here
  *) LodePNGColorMode info_raw: here you can say which color mode you want the raw image (the output) to have
  *) LodePNGDecoderSettings decoder: you can specify a few extra settings for the decoder to use

  LodePNGInfo info_png
  --------------------

  After decoding, this contains extra information about the PNG image, except the actual
  pixels, width and height because these are already returned directly by the decoder
  functions.

  It contains for example the original color type of the PNG image, text comments,
  suggested background color, etc... More details about the LodePNGInfo struct are
  at its declaration documentation.

  LodePNGColorMode info_raw
  -------------------------

  When decoding, here you can specify which color type you want
  the resulting raw image to be.
If this is different from the colortype of the
  PNG, then the decoder will automatically convert the result. This conversion
  always works, except if you want it to convert a color PNG to greyscale or to
  a palette with missing colors.

  By default, 32-bit color is used for the result.

  LodePNGDecoderSettings decoder
  ------------------------------

  The settings can be used to ignore the errors created by invalid CRC and Adler32
  checksums, and to disable the decoding of tEXt chunks.

  There's also a setting color_convert, true by default. If false, no conversion
  is done, the resulting data will be as it was in the PNG (after decompression)
  and you'll have to puzzle the colors of the pixels together yourself using the
  color type information in the LodePNGInfo.


  5. Encoding
  -----------

  Encoding converts a raw pixel buffer to a PNG compressed image.

  Most documentation on using the encoder is at its declarations in the header
  above. For C, simple encoding can be done with functions such as
  lodepng_encode32, and more advanced encoding can be done with the struct
  LodePNGState and lodepng_encode. For C++, all encoding can be done with the
  various lodepng::encode functions, and lodepng::State can be used for advanced
  features.

  Like the decoder, the encoder can also give errors.
However it gives fewer errors,
  since the encoder input is trusted, while the decoder input (a PNG image that
  could be forged by anyone) is not.

  When using the LodePNGState, it uses the following fields for encoding:
  *) LodePNGInfo info_png: here you specify how you want the PNG (the output) to be.
  *) LodePNGColorMode info_raw: here you say what color type the raw image (the input) has
  *) LodePNGEncoderSettings encoder: you can specify a few settings for the encoder to use

  LodePNGInfo info_png
  --------------------

  When encoding, you use this the opposite way from when decoding: for encoding,
  you fill in the values you want the PNG to have before encoding. By default it's
  not needed to specify a color type for the PNG since it's automatically chosen,
  but it's possible to choose it yourself given the right settings.

  The encoder will not always exactly match the LodePNGInfo struct you give;
  it tries to match it as closely as possible. Some things are ignored by the encoder.
The
  encoder uses, for example, the following settings from it when applicable:
  colortype and bitdepth, text chunks, time chunk, the color key, the palette, the
  background color, the interlace method, unknown chunks, ...

  When encoding to a PNG with colortype 3, the encoder will generate a PLTE chunk.
  If the palette contains any colors for which the alpha channel is not 255 (so
  there are translucent colors in the palette), it'll add a tRNS chunk.

  LodePNGColorMode info_raw
  -------------------------

  Here you specify the color type of the raw image that you give as input,
  including a possible transparent color key and palette you happen to be using in
  your raw image data.

  By default, 32-bit color is assumed, meaning your input has to be in RGBA
  format with 4 bytes (unsigned chars) per pixel.

  LodePNGEncoderSettings encoder
  ------------------------------

  The following settings are supported (some are in sub-structs):
  *) auto_convert: when this option is enabled, the encoder will
  automatically choose the smallest possible color mode (including color key) that
  can encode the colors of all pixels without information loss.
  *) btype: the block type for LZ77. 0 = uncompressed, 1 = fixed Huffman tree,
  2 = dynamic Huffman tree (best compression). Should be 2 for proper
  compression.
  *) use_lz77: whether or not to use LZ77 for compressed block types. Should be
  true for proper compression.
  *) windowsize: the window size used by the LZ77 encoder (1 - 32768). Has value
  2048 by default, but can be set to 32768 for better, but slow, compression.
  *) force_palette: if colortype is 2 or 6, you can make the encoder write a PLTE
  chunk if force_palette is true. This can be used as a suggested palette to convert
  to by viewers that don't support more than 256 colors (if those still exist)
  *) add_id: add text chunk "Encoder: LodePNG <version>" to the image.
  *) text_compression: default 1.
If 1, it'll store texts as zTXt instead of tEXt chunks.
  zTXt chunks use zlib compression on the text. This gives a smaller result on
  large texts but a larger result on small texts (such as a single program name).
  It's all tEXt or all zTXt though, there's no separate setting per text yet.


  6. color conversions
  --------------------

  An important thing to note about LodePNG is that the color type of the PNG, and
  the color type of the raw image, are completely independent. By default, when
  you decode a PNG, you get the result as a raw image in the color type you want,
  no matter whether the PNG was encoded with a palette, greyscale or RGBA color.
  And if you encode an image, by default LodePNG will automatically choose the PNG
  color type that gives good compression based on the values and number of colors
  in the image. It can be configured to let you control it instead as
  well, though.

  To be able to do this, LodePNG does conversions from one color mode to another.
  It can convert from almost any color type to any other color type, except the
  following conversions: RGB to greyscale is not supported, and converting to a
  palette when the palette doesn't have a required color is not supported. This is
  not supported on purpose: this is information loss which requires a color
  reduction algorithm that is beyond the scope of a PNG encoder (yes, RGB to grey
  is easy, but there are multiple ways if you want to give some channels more
  weight).

  By default, when decoding, you get the raw image in 32-bit RGBA or 24-bit RGB
  color, no matter what color type the PNG has. And by default when encoding,
  LodePNG automatically picks the best color model for the output PNG, and expects
  the input image to be 32-bit RGBA or 24-bit RGB. So, unless you want to control
  the color format of the images yourself, you can skip this chapter.

  6.1.
PNG color types\n  --------------------\n\n  A PNG image can have many color types, ranging from 1-bit color to 64-bit color,\n  as well as palettized color modes. After the zlib decompression and unfiltering\n  in the PNG image is done, the raw pixel data will have that color type and thus\n  a certain amount of bits per pixel. If you want the output raw image after\n  decoding to have another color type, a conversion is done by LodePNG.\n\n  The PNG specification gives the following color types:\n\n  0: greyscale, bit depths 1, 2, 4, 8, 16\n  2: RGB, bit depths 8 and 16\n  3: palette, bit depths 1, 2, 4 and 8\n  4: greyscale with alpha, bit depths 8 and 16\n  6: RGBA, bit depths 8 and 16\n\n  Bit depth is the amount of bits per pixel per color channel. So the total amount\n  of bits per pixel is: amount of channels * bitdepth.\n\n  6.2. color conversions\n  ----------------------\n\n  As explained in the sections about the encoder and decoder, you can specify\n  color types and bit depths in info_png and info_raw to change the default\n  behaviour.\n\n  If, when decoding, you want the raw image to be something else than the default,\n  you need to set the color type and bit depth you want in the LodePNGColorMode,\n  or the parameters colortype and bitdepth of the simple decoding function.\n\n  If, when encoding, you use another color type than the default in the raw input\n  image, you need to specify its color type and bit depth in the LodePNGColorMode\n  of the raw image, or use the parameters colortype and bitdepth of the simple\n  encoding function.\n\n  If, when encoding, you don't want LodePNG to choose the output PNG color type\n  but control it yourself, you need to set auto_convert in the encoder settings\n  to false, and specify the color type you want in the LodePNGInfo of the\n  encoder (including palette: it can generate a palette if auto_convert is true,\n  otherwise not).\n\n  If the input and output color type differ (whether user chosen or auto 
chosen),
  LodePNG will do a color conversion, which follows the rules below, and may
  sometimes result in an error.

  To avoid some confusion:
  -the decoder converts from PNG to raw image
  -the encoder converts from raw image to PNG
  -the colortype and bitdepth in LodePNGColorMode info_raw are those of the raw image
  -the colortype and bitdepth in the color field of LodePNGInfo info_png are those of the PNG
  -when encoding, the color type in LodePNGInfo is ignored if auto_convert
  is enabled; it is automatically generated instead
  -when decoding, the color type in LodePNGInfo is set by the decoder to that of the original
  PNG image, but it can be ignored since the raw image has the color type you requested instead
  -if the color type of the LodePNGColorMode and PNG image aren't the same, a conversion
  between the color types is done if the color types are supported. If it is not
  supported, an error is returned. If the types are the same, no conversion is done.
  -even though some conversions aren't supported, LodePNG supports loading PNGs from any
  colortype and saving PNGs to any colortype, sometimes it just requires preparing
  the raw image correctly before encoding.
  -both encoder and decoder use the same color converter.

  Unsupported color conversions:
  -color to greyscale: no error is thrown, but the result will look ugly because
  only the red channel is taken
  -anything to palette when that palette does not have that color in it: in this
  case an error is thrown

  Supported color conversions:
  -anything to 8-bit RGB, 8-bit RGBA, 16-bit RGB, 16-bit RGBA
  -any grey or grey+alpha, to grey or grey+alpha
  -anything to a palette, as long as the palette has the requested colors in it
  -removing alpha channel
  -higher to lower bit depth, and vice versa

  If you want no color conversion to be done (e.g.
for speed or control):
  -In the encoder, you can make it save a PNG with any color type by giving the
  raw color mode and LodePNGInfo the same color mode, and setting auto_convert to
  false.
  -In the decoder, you can make it store the pixel data in the same color type
  as the PNG has, by setting the color_convert setting to false. Settings in
  info_raw are then ignored.

  The function lodepng_convert does the color conversion. It is available in the
  interface but normally isn't needed since the encoder and decoder already call
  it.

  6.3. padding bits
  -----------------

  In the PNG file format, if a less than 8-bit per pixel color type is used and the scanlines
  have a number of bits that isn't a multiple of 8, then padding bits are used so that each
  scanline starts at a fresh byte. But that is NOT true for the LodePNG raw input and output.
  The raw input image you give to the encoder, and the raw output image you get from the decoder
  will NOT have these padding bits, e.g. in the case of a 1-bit image with a width
  of 7 pixels, the first pixel of the second scanline will be the 8th bit of the first byte,
  not the first bit of a new byte.

  6.4. A note about 16-bits per channel and endianness
  ----------------------------------------------------

  LodePNG uses unsigned char arrays for 16-bit per channel colors too, just like
  for any other color format. The 16-bit values are stored in big endian (most
  significant byte first) in these arrays. This is the opposite order of the
  little endian used by x86 CPUs.

  LodePNG always uses big endian because the PNG file format does so internally.
  Conversions to formats other than the one PNG uses internally are not supported by
  LodePNG on purpose, there are myriads of formats, including endianness of 16-bit
  colors, the order in which you store R, G, B and A, and so on.
Supporting and\n  converting to/from all that is outside the scope of LodePNG.\n\n  This may mean that, depending on your use case, you may want to convert the big\n  endian output of LodePNG to little endian with a for loop. This is certainly not\n  always needed, many applications and libraries support big endian 16-bit colors\n  anyway, but it means you cannot simply cast the unsigned char* buffer to an\n  unsigned short* buffer on x86 CPUs.\n\n\n  7. error values\n  ---------------\n\n  All functions in LodePNG that return an error code, return 0 if everything went\n  OK, or a non-zero code if there was an error.\n\n  The meaning of the LodePNG error values can be retrieved with the function\n  lodepng_error_text: given the numerical error code, it returns a description\n  of the error in English as a string.\n\n  Check the implementation of lodepng_error_text to see the meaning of each code.\n\n\n  8. chunks and PNG editing\n  -------------------------\n\n  If you want to add extra chunks to a PNG you encode, or use LodePNG for a PNG\n  editor that should follow the rules about handling of unknown chunks, or if your\n  program is able to read other types of chunks than the ones handled by LodePNG,\n  then that's possible with the chunk functions of LodePNG.\n\n  A PNG chunk has the following layout:\n\n  4 bytes length\n  4 bytes type name\n  length bytes data\n  4 bytes CRC\n\n  8.1. iterating through chunks\n  -----------------------------\n\n  If you have a buffer containing the PNG image data, then the first chunk (the\n  IHDR chunk) starts at byte number 8 of that buffer. The first 8 bytes are the\n  signature of the PNG and are not part of a chunk. But if you start at byte 8\n  then you have a chunk, and can check the following things of it.\n\n  NOTE: none of these functions check for memory buffer boundaries. 
To avoid\n  exploits, always make sure the buffer contains all the data of the chunks.\n  When using lodepng_chunk_next, make sure the returned value is within the\n  allocated memory.\n\n  unsigned lodepng_chunk_length(const unsigned char* chunk):\n\n  Get the length of the chunk's data. The total chunk length is this length + 12.\n\n  void lodepng_chunk_type(char type[5], const unsigned char* chunk):\n  unsigned char lodepng_chunk_type_equals(const unsigned char* chunk, const char* type):\n\n  Get the type of the chunk or compare if it's a certain type\n\n  unsigned char lodepng_chunk_critical(const unsigned char* chunk):\n  unsigned char lodepng_chunk_private(const unsigned char* chunk):\n  unsigned char lodepng_chunk_safetocopy(const unsigned char* chunk):\n\n  Check if the chunk is critical in the PNG standard (only IHDR, PLTE, IDAT and IEND are).\n  Check if the chunk is private (public chunks are part of the standard, private ones not).\n  Check if the chunk is safe to copy. If it's not, then, when modifying data in a critical\n  chunk, unsafe to copy chunks of the old image may NOT be saved in the new one if your\n  program doesn't handle that type of unknown chunk.\n\n  unsigned char* lodepng_chunk_data(unsigned char* chunk):\n  const unsigned char* lodepng_chunk_data_const(const unsigned char* chunk):\n\n  Get a pointer to the start of the data of the chunk.\n\n  unsigned lodepng_chunk_check_crc(const unsigned char* chunk):\n  void lodepng_chunk_generate_crc(unsigned char* chunk):\n\n  Check if the crc is correct or generate a correct one.\n\n  unsigned char* lodepng_chunk_next(unsigned char* chunk):\n  const unsigned char* lodepng_chunk_next_const(const unsigned char* chunk):\n\n  Iterate to the next chunk. This works if you have a buffer with consecutive chunks. 
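  To make the chunk layout concrete, here is a small standalone sketch (illustrative
  only, not the actual LodePNG implementation) of reading the big-endian length field
  of a chunk and stepping to the next one, in the way lodepng_chunk_length and
  lodepng_chunk_next do; the function names here are made up for the example:

```c
/* Read the 4-byte big-endian data length at the start of a chunk.
   Illustrative sketch of the chunk layout, not LodePNG's own code. */
static unsigned chunk_data_length(const unsigned char* chunk)
{
  return ((unsigned)chunk[0] << 24) | ((unsigned)chunk[1] << 16)
       | ((unsigned)chunk[2] << 8) | (unsigned)chunk[3];
}

/* Step to the next chunk: skip 4 (length) + 4 (type) + data + 4 (CRC) bytes.
   Like lodepng_chunk_next, this does no boundary checking; the caller must
   ensure the buffer really contains that many bytes. */
static const unsigned char* chunk_step(const unsigned char* chunk)
{
  return chunk + 12 + chunk_data_length(chunk);
}
```

  Note that the length is stored most significant byte first, so on little-endian
  machines it has to be assembled byte by byte as above instead of being read with
  a single integer load.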
Note that these
  functions do no boundary checking of the allocated data whatsoever, so make sure there is enough
  data available in the buffer to be able to go to the next chunk.

  unsigned lodepng_chunk_append(unsigned char** out, size_t* outlength, const unsigned char* chunk):
  unsigned lodepng_chunk_create(unsigned char** out, size_t* outlength, unsigned length,
  const char* type, const unsigned char* data):

  These functions are used to create new chunks that are appended to the data in *out that has
  length *outlength. The append function appends an existing chunk to the new data. The create
  function creates a new chunk with the given parameters and appends it. Type is the 4-letter
  name of the chunk.

  8.2. chunks in info_png
  -----------------------

  The LodePNGInfo struct contains fields with the unknown chunks in it. It has 3
  buffers (each with size) to contain 3 types of unknown chunks:
  the ones that come before the PLTE chunk, the ones that come between the PLTE
  and the IDAT chunks, and the ones that come after the IDAT chunks.
  It's necessary to make the distinction between these 3 cases because the PNG
  standard forces you to keep the ordering of unknown chunks relative to the critical
  chunks, but does not force any other ordering rules.

  info_png.unknown_chunks_data[0] is the chunks before PLTE
  info_png.unknown_chunks_data[1] is the chunks after PLTE, before IDAT
  info_png.unknown_chunks_data[2] is the chunks after IDAT

  The chunks in these 3 buffers can be iterated through and read in the same way
  as described in the previous subchapter.

  When using the decoder to decode a PNG, you can make it store all unknown chunks
  if you set the option settings.remember_unknown_chunks to 1.
By default, this
  option is off (0).

  The encoder will always encode unknown chunks that are stored in the info_png.
  If you need it to add a particular chunk that isn't known by LodePNG, you can
  use lodepng_chunk_append or lodepng_chunk_create to add it to the chunk data in
  info_png.unknown_chunks_data[x].

  Chunks that are known by LodePNG should not be added in that way. E.g. to make
  LodePNG add a bKGD chunk, set background_defined to true and add the correct
  parameters there instead.


  9. compiler support
  -------------------

  No libraries other than the standard C library are needed to compile
  LodePNG. For the C++ version, only the standard C++ library is needed on top.
  Add the files lodepng.c(pp) and lodepng.h to your project, include
  lodepng.h where needed, and your program can read/write PNG files.

  It is compatible with C90 and up, and C++03 and up.

  If performance is important, use optimization when compiling! For both the
  encoder and decoder, this makes a large difference.

  Make sure that LodePNG is compiled with the same compiler of the same version
  and with the same settings as the rest of the program, or the interfaces with
  std::vectors and std::strings in C++ can be incompatible.

  CHAR_BIT must be 8 or higher, because LodePNG uses unsigned chars for octets.

  *) gcc and g++

  LodePNG is developed in gcc so this compiler is natively supported. It gives no
  warnings with compiler options "-Wall -Wextra -pedantic -ansi", with gcc and g++
  version 4.7.1 on Linux, 32-bit and 64-bit.

  *) Clang

  Fully supported and warning-free.

  *) Mingw

  The Mingw compiler (a port of gcc for Windows) should be fully supported by
  LodePNG.

  *) Visual Studio and Visual C++ Express Edition

  LodePNG should be warning-free with warning level W4.
Two warnings were disabled
  with pragmas though: warning 4244 about implicit conversions, and warning 4996
  where it wants to use a non-standard function fopen_s instead of the standard C
  fopen.

  Visual Studio may want "stdafx.h" files to be included in each source file and
  give an error "unexpected end of file while looking for precompiled header".
  This is not standard C++ and will not be added to the stock LodePNG. You can
  disable it for lodepng.cpp only by right clicking it, Properties, C/C++,
  Precompiled Headers, and setting it to Not Using Precompiled Headers there.

  NOTE: Modern versions of VS should be fully supported, but old versions, e.g.
  VS6, are not guaranteed to work.

  *) Compilers on Macintosh

  LodePNG has been reported to work both with gcc and LLVM for Macintosh, both for
  C and C++.

  *) Other Compilers

  If you encounter problems on any compilers, feel free to let me know and I may
  try to fix it if the compiler is modern and standards compliant.


  10. examples
  ------------

  This decoder example shows the most basic usage of LodePNG. More complex
  examples can be found on the LodePNG website.

  10.1. decoder C++ example
  -------------------------

  #include "lodepng.h"
  #include <iostream>

  int main(int argc, char *argv[])
  {
  const char* filename = argc > 1 ? argv[1] : "test.png";

  //load and decode
  std::vector<unsigned char> image;
  unsigned width, height;
  unsigned error = lodepng::decode(image, width, height, filename);

  //if there's an error, display it
  if(error) std::cout << "decoder error " << error << ": " << lodepng_error_text(error) << std::endl;

  //the pixels are now in the vector "image", 4 bytes per pixel, ordered RGBARGBA..., use it as texture, draw it, ...
  }

  10.2.
decoder C example
  -----------------------

  #include "lodepng.h"
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char *argv[])
  {
  unsigned error;
  unsigned char* image;
  unsigned width, height;
  const char* filename = argc > 1 ? argv[1] : "test.png";

  error = lodepng_decode32_file(&image, &width, &height, filename);

  if(error) printf("decoder error %u: %s\n", error, lodepng_error_text(error));

  /* use image here */

  free(image);
  return 0;
  }

  11. state settings reference
  ----------------------------

  A quick reference of some settings to set on the LodePNGState

  For decoding:

  state.decoder.zlibsettings.ignore_adler32: ignore ADLER32 checksums
  state.decoder.zlibsettings.custom_...: use custom inflate function
  state.decoder.ignore_crc: ignore CRC checksums
  state.decoder.ignore_critical: ignore unknown critical chunks
  state.decoder.ignore_end: ignore missing IEND chunk. May fail if this corruption causes other errors
  state.decoder.color_convert: convert internal PNG color to chosen one
  state.decoder.read_text_chunks: whether to read in text metadata chunks
  state.decoder.remember_unknown_chunks: whether to read in unknown chunks
  state.info_raw.colortype: desired color type for decoded image
  state.info_raw.bitdepth: desired bit depth for decoded image
  state.info_raw....: more color settings, see struct LodePNGColorMode
  state.info_png....: no settings for decoder but output, see struct LodePNGInfo

  For encoding:

  state.encoder.zlibsettings.btype: disable compression by setting it to 0
  state.encoder.zlibsettings.use_lz77: use LZ77 in compression
  state.encoder.zlibsettings.windowsize: tweak LZ77 windowsize
  state.encoder.zlibsettings.minmatch: tweak min LZ77 length to match
  state.encoder.zlibsettings.nicematch: tweak LZ77 match where to stop searching
  state.encoder.zlibsettings.lazymatching: try one more LZ77 matching
  state.encoder.zlibsettings.custom_...: use custom deflate
function\n  state.encoder.auto_convert: choose optimal PNG color type, if 0 uses info_png\n  state.encoder.filter_palette_zero: PNG filter strategy for palette\n  state.encoder.filter_strategy: PNG filter strategy to encode with\n  state.encoder.force_palette: add palette even if not encoding to one\n  state.encoder.add_id: add LodePNG identifier and version as a text chunk\n  state.encoder.text_compression: use compressed text chunks for metadata\n  state.info_raw.colortype: color type of raw input image you provide\n  state.info_raw.bitdepth: bit depth of raw input image you provide\n  state.info_raw: more color settings, see struct LodePNGColorMode\n  state.info_png.color.colortype: desired color type if auto_convert is false\n  state.info_png.color.bitdepth: desired bit depth if auto_convert is false\n  state.info_png.color....: more color settings, see struct LodePNGColorMode\n  state.info_png....: more PNG related settings, see struct LodePNGInfo\n\n\n  12. changes\n  -----------\n\n  The version number of LodePNG is the date of the change given in the format\n  yyyymmdd.\n\n  Some changes aren't backwards compatible. Those are indicated with a (!)\n  symbol.\n\n  *) 19 aug 2018 (!): fixed color mode bKGD is encoded with and made it use palette\n  index in case of palette.\n  *) 10 aug 2018 (!): added support for gAMA, cHRM, sRGB and iCCP chunks. 
This\n  change is backwards compatible unless you relied on unknown_chunks for those.\n  *) 11 jun 2018: less restrictive check for pixel size integer overflow\n  *) 14 jan 2018: allow optionally ignoring a few more recoverable errors\n  *) 17 sep 2017: fix memory leak for some encoder input error cases\n  *) 27 nov 2016: grey+alpha auto color model detection bugfix\n  *) 18 apr 2016: Changed qsort to custom stable sort (for platforms w/o qsort).\n  *) 09 apr 2016: Fixed colorkey usage detection, and better file loading (within\n  the limits of pure C90).\n  *) 08 dec 2015: Made load_file function return error if file can't be opened.\n  *) 24 okt 2015: Bugfix with decoding to palette output.\n  *) 18 apr 2015: Boundary PM instead of just package-merge for faster encoding.\n  *) 23 aug 2014: Reduced needless memory usage of decoder.\n  *) 28 jun 2014: Removed fix_png setting, always support palette OOB for\n  simplicity. Made ColorProfile public.\n  *) 09 jun 2014: Faster encoder by fixing hash bug and more zeros optimization.\n  *) 22 dec 2013: Power of two windowsize required for optimization.\n  *) 15 apr 2013: Fixed bug with LAC_ALPHA and color key.\n  *) 25 mar 2013: Added an optional feature to ignore some PNG errors (fix_png).\n  *) 11 mar 2013 (!): Bugfix with custom free. Changed from \"my\" to \"lodepng_\"\n  prefix for the custom allocators and made it possible with a new #define to\n  use custom ones in your project without needing to change lodepng's code.\n  *) 28 jan 2013: Bugfix with color key.\n  *) 27 okt 2012: Tweaks in text chunk keyword length error handling.\n  *) 8 okt 2012 (!): Added new filter strategy (entropy) and new auto color mode.\n  (no palette). Better deflate tree encoding. New compression tweak settings.\n  Faster color conversions while decoding. 
Some internal cleanups.\n  *) 23 sep 2012: Reduced warnings in Visual Studio a little bit.\n  *) 1 sep 2012 (!): Removed #define's for giving custom (de)compression functions\n  and made it work with function pointers instead.\n  *) 23 jun 2012: Added more filter strategies. Made it easier to use custom alloc\n  and free functions and toggle #defines from compiler flags. Small fixes.\n  *) 6 may 2012 (!): Made plugging in custom zlib/deflate functions more flexible.\n  *) 22 apr 2012 (!): Made interface more consistent, renaming a lot. Removed\n  redundant C++ codec classes. Reduced amount of structs. Everything changed,\n  but it is cleaner now imho and functionality remains the same. Also fixed\n  several bugs and shrunk the implementation code. Made new samples.\n  *) 6 nov 2011 (!): By default, the encoder now automatically chooses the best\n  PNG color model and bit depth, based on the amount and type of colors of the\n  raw image. For this, autoLeaveOutAlphaChannel replaced by auto_choose_color.\n  *) 9 okt 2011: simpler hash chain implementation for the encoder.\n  *) 8 sep 2011: lz77 encoder lazy matching instead of greedy matching.\n  *) 23 aug 2011: tweaked the zlib compression parameters after benchmarking.\n  A bug with the PNG filtertype heuristic was fixed, so that it chooses much\n  better ones (it's quite significant). A setting to do an experimental, slow,\n  brute force search for PNG filter types is added.\n  *) 17 aug 2011 (!): changed some C zlib related function names.\n  *) 16 aug 2011: made the code less wide (max 120 characters per line).\n  *) 17 apr 2011: code cleanup. Bugfixes. Convert low to 16-bit per sample colors.\n  *) 21 feb 2011: fixed compiling for C90. 
Fixed compiling with sections disabled.\n  *) 11 dec 2010: encoding is made faster, based on suggestion by Peter Eastman\n  to optimize long sequences of zeros.\n  *) 13 nov 2010: added LodePNG_InfoColor_hasPaletteAlpha and\n  LodePNG_InfoColor_canHaveAlpha functions for convenience.\n  *) 7 nov 2010: added LodePNG_error_text function to get error code description.\n  *) 30 okt 2010: made decoding slightly faster\n  *) 26 okt 2010: (!) changed some C function and struct names (more consistent).\n  Reorganized the documentation and the declaration order in the header.\n  *) 08 aug 2010: only changed some comments and external samples.\n  *) 05 jul 2010: fixed bug thanks to warnings in the new gcc version.\n  *) 14 mar 2010: fixed bug where too much memory was allocated for char buffers.\n  *) 02 sep 2008: fixed bug where it could create empty tree that linux apps could\n  read by ignoring the problem but windows apps couldn't.\n  *) 06 jun 2008: added more error checks for out of memory cases.\n  *) 26 apr 2008: added a few more checks here and there to ensure more safety.\n  *) 06 mar 2008: crash with encoding of strings fixed\n  *) 02 feb 2008: support for international text chunks added (iTXt)\n  *) 23 jan 2008: small cleanups, and #defines to divide code in sections\n  *) 20 jan 2008: support for unknown chunks allowing using LodePNG for an editor.\n  *) 18 jan 2008: support for tIME and pHYs chunks added to encoder and decoder.\n  *) 17 jan 2008: ability to encode and decode compressed zTXt chunks added\n  Also various fixes, such as in the deflate and the padding bits code.\n  *) 13 jan 2008: Added ability to encode Adam7-interlaced images. Improved\n  filtering code of encoder.\n  *) 07 jan 2008: (!) changed LodePNG to use ISO C90 instead of C++. A\n  C++ wrapper around this provides an interface almost identical to before.\n  Having LodePNG be pure ISO C90 makes it more portable. 
The C and C++ code\n  are together in these files but it works both for C and C++ compilers.\n  *) 29 dec 2007: (!) changed most integer types to unsigned int + other tweaks\n  *) 30 aug 2007: bug fixed which makes this Borland C++ compatible\n  *) 09 aug 2007: some VS2005 warnings removed again\n  *) 21 jul 2007: deflate code placed in new namespace separate from zlib code\n  *) 08 jun 2007: fixed bug with 2- and 4-bit color, and small interlaced images\n  *) 04 jun 2007: improved support for Visual Studio 2005: crash with accessing\n  invalid std::vector element [0] fixed, and level 3 and 4 warnings removed\n  *) 02 jun 2007: made the encoder add a tag with version by default\n  *) 27 may 2007: zlib and png code separated (but still in the same file),\n  simple encoder/decoder functions added for more simple usage cases\n  *) 19 may 2007: minor fixes, some code cleaning, new error added (error 69),\n  moved some examples from here to lodepng_examples.cpp\n  *) 12 may 2007: palette decoding bug fixed\n  *) 24 apr 2007: changed the license from BSD to the zlib license\n  *) 11 mar 2007: very simple addition: ability to encode bKGD chunks.\n  *) 04 mar 2007: (!) tEXt chunk related fixes, and support for encoding\n  palettized PNG images. Plus little interface change with palette and texts.\n  *) 03 mar 2007: Made it encode dynamic Huffman shorter with repeat codes.\n  Fixed a bug where the end code of a block had length 0 in the Huffman tree.\n  *) 26 feb 2007: Huffman compression with dynamic trees (BTYPE 2) now implemented\n  and supported by the encoder, resulting in smaller PNGs at the output.\n  *) 27 jan 2007: Made the Adler-32 test faster so that a timewaste is gone.\n  *) 24 jan 2007: gave encoder an error interface. Added color conversion from any\n  greyscale type to 8-bit greyscale with or without alpha.\n  *) 21 jan 2007: (!) Totally changed the interface. It allows more color types\n  to convert to and is more uniform. 
See the manual for how it works now.\n  *) 07 jan 2007: Some cleanup & fixes, and a few changes over the last days:\n  encode/decode custom tEXt chunks, separate classes for zlib & deflate, and\n  at last made the decoder give errors for incorrect Adler32 or Crc.\n  *) 01 jan 2007: Fixed bug with encoding PNGs with less than 8 bits per channel.\n  *) 29 dec 2006: Added support for encoding images without alpha channel, and\n  cleaned out code as well as making certain parts faster.\n  *) 28 dec 2006: Added \"Settings\" to the encoder.\n  *) 26 dec 2006: The encoder now does LZ77 encoding and produces much smaller files now.\n  Removed some code duplication in the decoder. Fixed little bug in an example.\n  *) 09 dec 2006: (!) Placed output parameters of public functions as first parameter.\n  Fixed a bug of the decoder with 16-bit per color.\n  *) 15 okt 2006: Changed documentation structure\n  *) 09 okt 2006: Encoder class added. It encodes a valid PNG image from the\n  given image buffer, however for now it's not compressed.\n  *) 08 sep 2006: (!) Changed to interface with a Decoder class\n  *) 30 jul 2006: (!) LodePNG_InfoPng , width and height are now retrieved in different\n  way. Renamed decodePNG to decodePNGGeneric.\n  *) 29 jul 2006: (!) Changed the interface: image info is now returned as a\n  struct of type LodePNG::LodePNG_Info, instead of a vector, which was a bit clumsy.\n  *) 28 jul 2006: Cleaned the code and added new error checks.\n  Corrected terminology \"deflate\" into \"inflate\".\n  *) 23 jun 2006: Added SDL example in the documentation in the header, this\n  example allows easy debugging by displaying the PNG and its transparency.\n  *) 22 jun 2006: (!) Changed way to obtain error value. Added\n  loadFile function for convenience. Made decodePNG32 faster.\n  *) 21 jun 2006: (!) Changed type of info vector to unsigned.\n  Changed position of palette in info vector. 
Fixed an important bug that\n  happened on PNGs with an uncompressed block.\n  *) 16 jun 2006: Internally changed unsigned into unsigned where\n  needed, and performed some optimizations.\n  *) 07 jun 2006: (!) Renamed functions to decodePNG and placed them\n  in LodePNG namespace. Changed the order of the parameters. Rewrote the\n  documentation in the header. Renamed files to lodepng.cpp and lodepng.h\n  *) 22 apr 2006: Optimized and improved some code\n  *) 07 sep 2005: (!) Changed to std::vector interface\n  *) 12 aug 2005: Initial release (C++, decoder only)\n\n\n  13. contact information\n  -----------------------\n\n  Feel free to contact me with suggestions, problems, comments, ... concerning\n  LodePNG. If you encounter a PNG image that doesn't work properly with this\n  decoder, feel free to send it and I'll use it to find and fix the problem.\n\n  My email address is (puzzle the account and domain together with an @ symbol):\n  Domain: gmail dot com.\n  Account: lode dot vandevenne.\n\n\n  Copyright (c) 2005-2018 Lode Vandevenne\n  */\n"
  },
  {
    "path": "PinBox/PinBox/include/yuv_rgb.h",
"content": "// Copyright 2016 Adrien Descamps\n// Distributed under BSD 3-Clause License\n\n// Provides optimized functions to convert images from 8-bit yuv420 to rgb24 format\n\n// There are a few slightly different variations of the YCbCr color space with different parameters that \n// change the conversion matrix.\n// The three most common YCbCr color spaces, defined by the BT.601, BT.709 and JPEG standards, are implemented here.\n// See the respective standards for details.\n// The matrix values used are derived from http://www.equasys.de/colorconversion.html\n\n// YUV420 is stored as three separate channels, with U and V (Cb and Cr) subsampled by a factor of 2.\n// For conversion from yuv to rgb, no interpolation is done, and the same UV values are used for 4 rgb pixels. This \n// is suboptimal for image quality, but by far the fastest method.\n\n// For all methods, width and height should be even; if not, the last row/column of the result image won't be affected.\n// For sse methods, if the width is not divisible by 32, the last (width%32) pixels of each line won't be affected.\n\n#include <stdint.h>\n\ntypedef enum\n{\n\tYCBCR_JPEG,\n\tYCBCR_601,\n\tYCBCR_709\n} YCbCrType;\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n// yuv to rgb, standard c implementation\nvoid yuv420_rgb24_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *y, const uint8_t *u, const uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tuint8_t *rgb, uint32_t rgb_stride, \n\tYCbCrType yuv_type);\n\n// yuv to rgb, sse implementation\n// pointers must be 16 byte aligned, and strides must be divisible by 16\nvoid yuv420_rgb24_sse(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *y, const uint8_t *u, const uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tuint8_t *rgb, uint32_t rgb_stride, \n\tYCbCrType yuv_type);\n\n// yuv to rgb, sse implementation\n// pointers do not need to be 16 byte aligned\nvoid yuv420_rgb24_sseu(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *y, 
const uint8_t *u, const uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tuint8_t *rgb, uint32_t rgb_stride, \n\tYCbCrType yuv_type);\n\n\n\n// rgb to yuv, standard c implementation\nvoid rgb24_yuv420_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *rgb, uint32_t rgb_stride, \n\tuint8_t *y, uint8_t *u, uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tYCbCrType yuv_type);\n\n// rgb to yuv, sse implementation\n// pointers must be 16 byte aligned, and strides must be divisible by 16\nvoid rgb24_yuv420_sse(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *rgb, uint32_t rgb_stride, \n\tuint8_t *y, uint8_t *u, uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tYCbCrType yuv_type);\n\n// rgb to yuv, sse implementation\n// pointers do not need to be 16 byte aligned\nvoid rgb24_yuv420_sseu(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *rgb, uint32_t rgb_stride, \n\tuint8_t *y, uint8_t *u, uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tYCbCrType yuv_type);\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "PinBox/PinBox/source/ConfigManager.cpp",
"content": "#include \"ConfigManager.h\"\r\n#define CONFIG_FILE_NAME \"pinbox/pinbox.cfg\"\r\n\r\nstatic ConfigManager* ref;\r\n\r\nConfigManager* ConfigManager::Get()\r\n{\r\n\tif (ref == nullptr) ref = new ConfigManager();\r\n\treturn ref;\r\n}\r\n\r\nConfigManager::ConfigManager()\r\n{\r\n}\r\n\r\n\r\nbool ConfigManager::shouldCreateNewConfigFile()\r\n{\r\n\tif (!config_read_file(&_config, CONFIG_FILE_NAME)) {\r\n\t\tprintf(\"> Config file was not found.\\n\");\r\n\t\treturn true;\r\n\t}\r\n\tint version = 0;\r\n\tif (!config_lookup_int(&_config, \"version\", &version)) {\r\n\t\tprintf(\"> Config does not have a version.\\n\");\r\n\t\treturn true;\r\n\t}\r\n\tif (version < FORCE_OVERRIDE_VERSION) {\r\n\t\tprintf(\"> Config file is outdated.\\n\");\r\n\t\treturn true;\r\n\t}\r\n\treturn false;\r\n}\r\n\r\nvoid ConfigManager::createNewConfigFile()\r\n{\r\n\tconfig_setting_t *root, *setting;\r\n\tconfig_destroy(&_config);\r\n\tconfig_init(&_config);\r\n\troot = config_root_setting(&_config);\r\n\r\n\t// server config group\r\n\tsetting = config_setting_add(root, \"servers\", CONFIG_TYPE_LIST);\r\n\tsetting = config_setting_add(root, \"last_using_server\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, -1);\r\n\r\n\t// video config\r\n\tsetting = config_setting_add(root, \"video_bit_rate\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, 120000);\r\n\tsetting = config_setting_add(root, \"video_gop\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, 18);\r\n\tsetting = config_setting_add(root, \"video_max_b_frames\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, 2);\r\n\r\n\t// audio\r\n\tsetting = config_setting_add(root, \"audio_bit_rate\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, 48000);\r\n\r\n\t// mode\r\n\tsetting = config_setting_add(root, \"wait_for_sync\", CONFIG_TYPE_BOOL);\r\n\tconfig_setting_set_bool(setting, 
true);\r\n\r\n\tsetting = config_setting_add(root, \"version\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, FORCE_OVERRIDE_VERSION);\r\n\r\n\tconfig_set_options(&_config,\r\n\t\t(CONFIG_OPTION_SEMICOLON_SEPARATORS\r\n\t\t\t| CONFIG_OPTION_COLON_ASSIGNMENT_FOR_GROUPS\r\n\t\t\t| CONFIG_OPTION_OPEN_BRACE_ON_SEPARATE_LINE));\r\n\tconfig_write_file(&_config, CONFIG_FILE_NAME);\r\n\r\n\t// default values\r\n\tvideoBitRate = 120000;\r\n\tvideoGOP = 18;\r\n\tvideoMaxBFrames = 2;\r\n\taudioBitRate = 48000;\r\n\twaitForSync = true;\r\n\tlastUsingServer = -1;\r\n}\r\n\r\nvoid ConfigManager::loadConfigFile()\r\n{\r\n\tconfig_setting_t *root, *setting;\r\n\troot = config_root_setting(&_config);\r\n\tsetting = config_lookup(&_config, \"servers\");\r\n\tif (setting != NULL)\r\n\t{\r\n\t\tprintf(\"> Load server profiles\\n\");\r\n\t\tint count = config_setting_length(setting), i = 0;\r\n\t\tfor(i = 0; i < count; ++i)\r\n\t\t{\r\n\t\t\tconfig_setting_t* serverCfg = config_setting_get_elem(setting, i);\r\n\t\t\tServerConfig server{};\r\n\t\t\tconst char* ip, * port, *name;\r\n\t\t\tif(!(config_setting_lookup_string(serverCfg, \"ip\", &ip)\r\n\t\t\t\t&& config_setting_lookup_string(serverCfg, \"port\", &port)\r\n\t\t\t\t&& config_setting_lookup_string(serverCfg, \"name\", &name)))\r\n\t\t\t{\r\n\t\t\t\tcontinue;\r\n\t\t\t}\r\n\t\t\tserver.name = std::string(name);\r\n\t\t\tserver.ip = std::string(ip);\r\n\t\t\tserver.port = std::string(port);\r\n\t\t\tservers.push_back(server);\r\n\t\t}\r\n\t}\r\n\tif (!config_lookup_int(&_config, \"last_using_server\", &lastUsingServer)) lastUsingServer = -1;\r\n\r\n\t// video\r\n\tif (!config_lookup_int(&_config, \"video_bit_rate\", &videoBitRate)) videoBitRate = 120000;\r\n\tif (!config_lookup_int(&_config, \"video_gop\", &videoGOP)) videoGOP = 18;\r\n\tif (!config_lookup_int(&_config, \"video_max_b_frames\", &videoMaxBFrames)) videoMaxBFrames = 2;\r\n\r\n\t// audio\r\n\tif (!config_lookup_int(&_config, \"audio_bit_rate\", 
&audioBitRate)) audioBitRate = 48000;\r\n\r\n\t// mode\r\n\tint waitForReceived = 0;\r\n\tif (!config_lookup_bool(&_config, \"wait_for_sync\", &waitForReceived)) {\r\n\t\twaitForSync = true;\r\n\t}\r\n\telse waitForSync = waitForReceived;\r\n}\r\n\r\n\r\nvoid ConfigManager::InitConfig()\r\n{\r\n\tconfig_init(&_config);\r\n\tconfig_set_options(&_config,\r\n\t\t(CONFIG_OPTION_SEMICOLON_SEPARATORS\r\n\t\t\t| CONFIG_OPTION_COLON_ASSIGNMENT_FOR_GROUPS\r\n\t\t\t| CONFIG_OPTION_OPEN_BRACE_ON_SEPARATE_LINE));\r\n\tprintf(\"----------------------\\nInitialize Config\\n----------------------\\n\");\r\n\tif(shouldCreateNewConfigFile())\r\n\t{\r\n\t\tprintf(\"> Create new config file\\n\");\r\n\t\tcreateNewConfigFile();\r\n\t}else\r\n\t{\r\n\t\tprintf(\"> Load config file\\n\");\r\n\t\tloadConfigFile();\r\n\t}\r\n}\r\n\r\nvoid ConfigManager::Save()\r\n{\r\n\tconfig_setting_t *root, *setting, *element;\r\n\troot = config_root_setting(&_config);\r\n\r\n\t// reset server list\r\n\tsetting = config_lookup(&_config, \"servers\");\r\n\tif (setting != NULL) {\r\n\t\tint profileCount = config_setting_length(setting);\r\n\t\twhile (profileCount > 0)\r\n\t\t{\r\n\t\t\tint ret = config_setting_remove_elem(setting, 0);\r\n\t\t\tif (ret != CONFIG_TRUE)\r\n\t\t\t{\r\n\t\t\t\tprintf(\"> Failed to delete profile at: %d\\n\", profileCount);\r\n\t\t\t}\r\n\t\t\t--profileCount;\r\n\t\t}\r\n\t}\r\n\tfor (auto& i : servers)\r\n\t{\r\n\t\tconfig_setting_t* server = config_setting_add(setting, NULL, CONFIG_TYPE_GROUP);\r\n\t\tif(server != nullptr)\r\n\t\t{\r\n\t\t\tprintf(\"> Profile: %s %s:%s.\\n\", i.name.c_str(), i.ip.c_str(), i.port.c_str());\r\n\t\t\telement = config_setting_add(server, \"ip\", CONFIG_TYPE_STRING);\r\n\t\t\tconfig_setting_set_string(element, i.ip.c_str());\r\n\t\t\telement = config_setting_add(server, \"port\", CONFIG_TYPE_STRING);\r\n\t\t\tconfig_setting_set_string(element, i.port.c_str());\r\n\t\t\telement = config_setting_add(server, \"name\", 
CONFIG_TYPE_STRING);\r\n\t\t\tconfig_setting_set_string(element, i.name.c_str());\r\n\t\t}\r\n\t}\r\n\tsetting = config_setting_get_member(root, \"last_using_server\");\r\n\tif (!setting) setting = config_setting_add(root, \"last_using_server\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, lastUsingServer);\r\n\r\n\tprintf(\"> Write video config.\\n\");\r\n\t// video\r\n\tsetting = config_setting_get_member(root, \"video_bit_rate\");\r\n\tif (!setting) setting = config_setting_add(root, \"video_bit_rate\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, videoBitRate);\r\n\r\n\tsetting = config_setting_get_member(root, \"video_gop\");\r\n\tif (!setting) setting = config_setting_add(root, \"video_gop\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, videoGOP);\r\n\r\n\tsetting = config_setting_get_member(root, \"video_max_b_frames\");\r\n\tif (!setting) setting = config_setting_add(root, \"video_max_b_frames\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, videoMaxBFrames);\r\n\r\n\tprintf(\"> Write audio config.\\n\");\r\n\t// audio\r\n\tsetting = config_setting_get_member(root, \"audio_bit_rate\");\r\n\tif (!setting) setting = config_setting_add(root, \"audio_bit_rate\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, audioBitRate);\r\n\r\n\t// mode\r\n\tsetting = config_setting_get_member(root, \"wait_for_sync\");\r\n\tif (!setting) setting = config_setting_add(root, \"wait_for_sync\", CONFIG_TYPE_BOOL);\r\n\tconfig_setting_set_bool(setting, waitForSync);\r\n\r\n\tsetting = config_setting_get_member(root, \"version\");\r\n\tif (!setting) setting = config_setting_add(root, \"version\", CONFIG_TYPE_INT);\r\n\tconfig_setting_set_int(setting, FORCE_OVERRIDE_VERSION);\r\n\r\n\t// write file\r\n\tint ret = config_write_file(&_config, CONFIG_FILE_NAME);\r\n\tprintf(\"> Save result: %d\\n\", ret);\r\n}\r\n\r\nvoid ConfigManager::Destroy()\r\n{\r\n\tconfig_destroy(&_config);\r\n}\r\n"
  },
  {
    "path": "PinBox/PinBox/source/Mutex.cpp",
"content": "#include \"Mutex.h\"\r\n\r\n\r\n\r\nMutex::Mutex()\r\n{\r\n\t_isLocked = false;\r\n\tLightLock_Init(&_handler);\r\n}\r\n\r\n\r\nMutex::~Mutex()\r\n{\r\n\tif(_isLocked) LightLock_Unlock(&_handler);\r\n}\r\n\r\nvoid Mutex::Lock()\r\n{\r\n\tLightLock_Lock(&_handler);\r\n\t_isLocked = true;\r\n}\r\n\r\nvoid Mutex::TryLock()\r\n{\r\n\tif(!LightLock_TryLock(&_handler)) _isLocked = true;\r\n}\r\n\r\nvoid Mutex::Unlock()\r\n{\r\n\tLightLock_Unlock(&_handler);\r\n\t_isLocked = false;\r\n}\r\n"
  },
  {
    "path": "PinBox/PinBox/source/PPAudio.cpp",
    "content": "#include \"PPAudio.h\"\n#include <cstring>\n#include <cstdio>\n\n#define SAMPLERATE 22050\n#define SAMPLESPERBUF 1152\n#define BYTESPERSAMPLE 4\n/* Channel to play audio on */\n#define CHANNEL 0x00\n\n//---------------------------------------------------------------------\nstatic PPAudio* mInstance = nullptr;\nPPAudio* PPAudio::Get()\n{\n\tif (mInstance == nullptr)\n\t{\n\t\tmInstance = new PPAudio();\n\t}\n\treturn mInstance;\n}\n\nvoid PPAudio::AudioInit()\n{\n\tif (_initialized) return;\n\n\tndspInit();\n\n\t// buf temporary\n\tu8 * temp = (u8 *)linearAlloc(SAMPLESPERBUF * BYTESPERSAMPLE * MAX_AUDIO_BUF);\n\t// fail to alloc tmp memory\n\tif(temp == nullptr)\n\t{\n\t\tndspExit();\n\t\treturn;\n\t}\n\n\tmemset(temp, 0, SAMPLESPERBUF * BYTESPERSAMPLE * MAX_AUDIO_BUF);\n\tDSP_FlushDataCache(temp, SAMPLESPERBUF * BYTESPERSAMPLE * MAX_AUDIO_BUF);\n\n\tndspChnWaveBufClear(CHANNEL);\n\tndspChnReset(CHANNEL);\n\tndspSetOutputMode(NDSP_OUTPUT_STEREO);\n\tndspChnSetInterp(CHANNEL, NDSP_INTERP_POLYPHASE);\n\tndspChnSetRate(CHANNEL, SAMPLERATE);\n\tndspChnSetFormat(CHANNEL, NDSP_FORMAT_STEREO_PCM16);\n\n\t// set mix\n\tfloat mix[12];\n\tmemset(mix, 0, sizeof(mix));\n\tmix[0] = 1.0;\n\tmix[1] = 1.0;\n\tndspChnSetMix(0, mix);\n\n\tmemset(_waveBuf, 0, sizeof(_waveBuf));\n\t_waveBuf[0].data_vaddr = temp;\n\t_waveBuf[0].nsamples = SAMPLESPERBUF;\n\t_waveBuf[0].status = NDSP_WBUF_DONE;\n\t_waveBuf[1].data_vaddr = temp + (SAMPLESPERBUF * BYTESPERSAMPLE);\n\t_waveBuf[1].nsamples = SAMPLESPERBUF;\n\t_waveBuf[1].status = NDSP_WBUF_DONE;\n\n\t_initialized = true;\n}\n\nvoid PPAudio::AudioExit()\n{\n\tif (!_initialized) return;\n\t_initialized = false;\n\n\tndspChnWaveBufClear(CHANNEL);\n\tndspExit();\n\n\tlinearFree(_waveBuf[0].data_vaddr);\n\t_waveBuf[0].data_vaddr = NULL;\n}\n\nvoid PPAudio::FillBuffer(u8* buffer, u32 size)\n{\n\tif (!_initialized) return;\n\n\tsize_t nextBuf = _nextBuf;\n\twhile (_waveBuf[nextBuf].status != 
NDSP_WBUF_DONE)\n\t{\n\t\tsvcSleepThread(25* 1000ULL);\n\t}\n\n\tmemcpy(_waveBuf[nextBuf].data_pcm16, buffer, SAMPLESPERBUF * BYTESPERSAMPLE);\n\tDSP_FlushDataCache(_waveBuf[nextBuf].data_pcm16, SAMPLESPERBUF * BYTESPERSAMPLE);\n\n\t_waveBuf[nextBuf].offset = 0;\n\t_waveBuf[nextBuf].status = NDSP_WBUF_QUEUED;\n\tndspChnWaveBufAdd(CHANNEL, &_waveBuf[nextBuf]);\n\n\t_nextBuf = (nextBuf + 1) % MAX_AUDIO_BUF;\n}"
  },
  {
    "path": "PinBox/PinBox/source/PPDecoder.cpp",
    "content": "#include \"PPDecoder.h\"\n#include \"PPAudio.h\"\n\nstatic u8* pRGBBuffer = nullptr;\n\nvolatile bool initialized = false;\n\n\nPPDecoder::PPDecoder()\n{\n}\n\nPPDecoder::~PPDecoder()\n{\n\t\n}\n\nvoid PPDecoder::initDecoder()\n{\n\tif (initialized) return;\n\tinitialized = true;\n\n\tav_log_set_level(AV_LOG_QUIET);\n\tav_register_all();\n\t//-----------------------------------------------------------------\n\t// init video encoder\n\t//-----------------------------------------------------------------\n\tconst AVCodec* videoCodec = avcodec_find_decoder(AV_CODEC_ID_MPEG4);\n\tpVideoContext = avcodec_alloc_context3(videoCodec);\n\tpVideoContext->width = 400;\n\tpVideoContext->height = 240;\n\tpVideoContext->pix_fmt = AV_PIX_FMT_YUV420P;\n\tpVideoContext->thread_count = 4;\n\tpVideoContext->thread_type = FF_THREAD_FRAME;\n\t// Open\n\tint ret = avcodec_open2(pVideoContext, videoCodec, NULL);\n\tif (ret < 0)\n\t{\n\t\tprintf(\"failed to open Video decoder: %d\\n\", ret);\n\t}\n\tpVideoPacket = av_packet_alloc();\n\tpVideoFrame = av_frame_alloc();\n\tinitY2RImageConverter();\n\n\t//-----------------------------------------------------------------\n\t// init audio encoder\n\t//-----------------------------------------------------------------\n\tconst AVCodec *audioCodec = avcodec_find_decoder(AV_CODEC_ID_MP2);\n\tif(!audioCodec)\n\t{\n\t\tprintf(\"[Audio] Codec not found!\\n\");\n\t}\n\tpAudioContext = avcodec_alloc_context3(audioCodec);\n\tpAudioContext->bit_rate = 64000;\n\tpAudioContext->sample_fmt = AV_SAMPLE_FMT_S16;\n\tpAudioContext->request_sample_fmt = AV_SAMPLE_FMT_S16;\n\tpAudioContext->request_channel_layout = AV_CH_LAYOUT_STEREO;\n\tpAudioContext->sample_rate = 22050;\n\tpAudioContext->channels = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO);;\n\tpAudioContext->channel_layout = AV_CH_LAYOUT_STEREO;\n\tret = avcodec_open2(pAudioContext, audioCodec, NULL);\n\tif(ret < 0)\n\t{\n\t\tprintf(\"[Audio] Failed to open context: %d\\n\", 
ret);\n\t}\n\tpAudioPacket = av_packet_alloc();\n\tpAudioFrame = av_frame_alloc();\n}\n\nvoid PPDecoder::releaseDecoder()\n{\n\tif (!initialized) return;\n\tinitialized = false;\n\tprintf(\"Release audio/video decoder.\\n\");\n\t// free video\n\tavcodec_free_context(&pVideoContext);\n\tav_frame_free(&pVideoFrame);\n\tav_packet_free(&pVideoPacket);\n\t// free audio\n\tavcodec_free_context(&pAudioContext);\n\tav_frame_free(&pAudioFrame);\n\tav_packet_free(&pAudioPacket);\n\t//---------------------------------------------------\n\tbool is_busy = 0;\n\tY2RU_StopConversion();\n\tY2RU_IsBusyConversion(&is_busy);\n\ty2rExit();\n\n\tlinearFree(pRGBBuffer);\n}\n\nu8* PPDecoder::appendVideoBuffer(u8* buffer, u32 size)\n{\n\tif (!initialized) return nullptr;\n\tif (size <= 0) return nullptr;\n\tpVideoPacket->data = buffer;\n\tpVideoPacket->size = size;\n\treturn decodeVideoStream();\n}\n\nvoid PPDecoder::initY2RImageConverter()\n{\n\t//--------------------------------------------------------------------------------------\n\t// NOTE: code borrow from 3DS video player\n\t// ref: https://github.com/Lectem/3Damnesic/blob/master/source/color_converter.c\n\t//--------------------------------------------------------------------------------------\n\tpDecodeState = new DecodeState();\n\tResult res = 0;\n\tres = y2rInit();\n\tif (res != 0) printf(\"Error when init Y2R\\n\");\n\tres = Y2RU_StopConversion();\n\tif (res != 0) printf(\"Error on Y2RU_StopConversion\\n\");\n\tbool is_busy = 0;\n\tint tries = 0;\n\tdo\n\t{\n\t\tsvcSleepThread(25 * 1000ull);\n\t\tres = Y2RU_StopConversion();\n\t\tif (res != 0) printf(\"Error on Y2RU_StopConversion\\n\");\n\t\tres = Y2RU_IsBusyConversion(&is_busy);\n\t\tif (res != 0) printf(\"Error on Y2RU_IsBusyConversion\\n\");\n\t\ttries += 1;\n\t} while (is_busy && tries < 100);\n\tpDecodeState->y2rParams.input_format = INPUT_YUV420_INDIV_8;\n\tpDecodeState->y2rParams.output_format = OUTPUT_RGB_24;\n\tpDecodeState->y2rParams.rotation = 
ROTATION_NONE;\n\tpDecodeState->y2rParams.block_alignment = BLOCK_8_BY_8;\n\tpDecodeState->y2rParams.input_line_width = 400;\n\tpDecodeState->y2rParams.input_lines = 240;\n\tif (pDecodeState->y2rParams.input_lines % 8) {\n\t\tpDecodeState->y2rParams.input_lines += 8 - (pDecodeState->y2rParams.input_lines % 8);\n\t}\n\tpDecodeState->y2rParams.standard_coefficient = COEFFICIENT_ITU_R_BT_601;\n\tpDecodeState->y2rParams.unused = 0;\n\tpDecodeState->y2rParams.alpha = 0xFF;\n\tres = Y2RU_SetConversionParams(&pDecodeState->y2rParams);\n\tif (res != 0) printf(\"Error on Y2RU_SetConversionParams\\n\");\n\tres = Y2RU_SetTransferEndInterrupt(true);\n\tif (res != 0) printf(\"Error on Y2RU_SetTransferEndInterrupt\\n\");\n\tpDecodeState->endEvent = 0;\n\tres = Y2RU_GetTransferEndEvent(&pDecodeState->endEvent);\n\tif (res != 0) printf(\"Error on Y2RU_GetTransferEndEvent\\n\");\n}\n\nvoid PPDecoder::convertColor()\n{\n\tResult res;\n\tconst s16 img_w = pDecodeState->y2rParams.input_line_width;\n\tconst s16 img_h = pDecodeState->y2rParams.input_lines;\n\tconst u32 img_size = img_w * img_h;\n\tconst u32 img_w_UV = img_w >> 1;\n\tsize_t src_Y_size = 0;\n\tsize_t src_UV_size = 0;\n\n\tswitch (pDecodeState->y2rParams.input_format)\n\t{\n\tcase INPUT_YUV422_INDIV_8:\n\t\tsrc_Y_size = img_size;\n\t\tsrc_UV_size = img_size / 2;\n\t\tbreak;\n\tcase INPUT_YUV420_INDIV_8:\n\t\tsrc_Y_size = img_size;\n\t\tsrc_UV_size = img_size / 4;\n\t\tbreak;\n\tcase INPUT_YUV422_INDIV_16:\n\t\tsrc_Y_size = img_size * 2;\n\t\tsrc_UV_size = img_size / 2 * 2;\n\t\tbreak;\n\tcase INPUT_YUV420_INDIV_16:\n\t\tsrc_Y_size = img_size * 2;\n\t\tsrc_UV_size = img_size / 4 * 2;\n\t\tbreak;\n\tcase INPUT_YUV422_BATCH:\n\t\tsrc_Y_size = img_size * 2;\n\t\tsrc_UV_size = img_size * 2;\n\t\tbreak;\n\t}\n\n\tu8 *src_Y = (u8*)pVideoFrame->data[0];\n\tu8 *src_U = (u8*)pVideoFrame->data[1];\n\tu8 *src_V = (u8*)pVideoFrame->data[2];\n\tconst s16 src_Y_padding = pVideoFrame->linesize[0] - img_w;\n\tconst s16 src_UV_padding = 
pVideoFrame->linesize[1] - img_w_UV;\n\n\tY2RU_StopConversion();\n\n\tres = Y2RU_SetSendingY(src_Y, src_Y_size, img_w, src_Y_padding);\n\tif (res != 0) \n\t\tprintf(\"Error on Y2RU_SetSendingY\\n\");\n\tres = Y2RU_SetSendingU(src_U, src_UV_size, img_w_UV, src_UV_padding);\n\tif (res != 0) \n\t\tprintf(\"Error on Y2RU_SetSendingU\\n\");\n\tres = Y2RU_SetSendingV(src_V, src_UV_size, img_w_UV, src_UV_padding);\n\tif (res != 0) \n\t\tprintf(\"Error on Y2RU_SetSendingV\\n\");\n\n\tconst u16 pixSize = 3;\n\tsize_t rgb_size = (512*256) * pixSize;\n\ts16 transfer_unit = 8;\n\ts16 gap = (512 - img_w) * transfer_unit * pixSize;\n\t//TODO: try this with get frame buffer\n\tres = Y2RU_SetReceiving(pRGBBuffer, rgb_size, img_w * transfer_unit * pixSize, gap);\n\tif (res != 0) \n\t\tprintf(\"Error on Y2RU_SetReceiving\\n\");\n\tres = Y2RU_StartConversion();\n\tif (res != 0) \n\t\tprintf(\"Error on Y2RU_StartConversion\\n\");\n\tres = svcWaitSynchronization(pDecodeState->endEvent, 1000000000ull);\n\tif (res != 0) \n\t\tprintf(\"Error on svcWaitSynchronization\\n\");\n}\n\nu8* PPDecoder::decodeVideoStream()\n{\n\tint ret = 0;\n\tret = avcodec_send_packet(pVideoContext, pVideoPacket);\n\tif (ret < 0) return nullptr;\n\tret = avcodec_receive_frame(pVideoContext, pVideoFrame);\n\tif (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) return nullptr;\n\tif (ret < 0) return nullptr;\n\n\t//----------------------------------------------\n\t// decode video frame\n\t//----------------------------------------------\n\tiFrameWidth = pVideoFrame->width;\n\tiFrameHeight =pVideoFrame->height;\n\tif (pRGBBuffer == nullptr) pRGBBuffer = (u8*)linearMemAlign(3 * 512 * 256, 0x80);\n\tconvertColor();\n\n\tav_packet_unref(pVideoPacket);\n\n\treturn pRGBBuffer;\n}\n\nvoid PPDecoder::decodeAudioStream(u8* buffer, u32 size)\n{\n\n\tif (!initialized) return;\n\tif (size == 0) return;\n\n\tint ret = 0;\n\tav_packet_unref(pVideoPacket);\n\tpAudioPacket->data = buffer;\n\tpAudioPacket->size = size;\n\n\tret = 
avcodec_send_packet(pAudioContext, pAudioPacket);\n\tif (ret < 0) {\n\t\tprintf(\"[Audio] Error submitting the packet to the decoder: %d\\n\", ret);\n\t}\n\telse {\n\t\twhile (ret >= 0) {\n\t\t\tret = avcodec_receive_frame(pAudioContext, pAudioFrame);\n\t\t\tif (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) return;\n\t\t\tif (ret < 0)\n\t\t\t{\n\t\t\t\tprintf(\"[Audio] get frame failed: %d\\n\", ret);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tPPAudio::Get()->FillBuffer(pAudioFrame->extended_data[0], pAudioFrame->linesize[0]);\n\t\t}\n\t}\n\t\n}\n"
  },
  {
    "path": "PinBox/PinBox/source/PPGraphics.cpp",
    "content": "#include \"PPGraphics.h\"\r\n#include \"vshader_shbin.h\"\r\n#include <cstdio>\r\n#include <cstdlib>\r\n#include <3ds/gpu/enums.h>\r\n#include <3ds/gfx.h>\r\n#include <c3d/base.h>\r\n#include <c3d/renderqueue.h>\r\n#include <3ds/gpu/gx.h>\r\n#include <3ds/allocator/linear.h>\r\n#include <3ds/gpu/shaderProgram.h>\r\n#include <citro3d.h>\r\n#include <3ds/font.h>\r\n#include <3ds/util/utf.h>\r\n#include \"lodepng.h\"\r\n\r\n#define IM_ARRAYSIZE(_ARR) ((int)(sizeof(_ARR)/sizeof(*_ARR)))\r\n//---------------------------------------------------------------------\r\n// Graphics Instance\r\n//---------------------------------------------------------------------\r\nstatic PPGraphics* mInstance = nullptr;\r\n\r\nstatic Vector2 mCircleVertex12[12];\r\n\r\n#define TEX_MIN_SIZE 32\r\n//Grabbed from: http://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2\r\nunsigned int next_pow2(unsigned int v)\r\n{\r\n\tv--;\r\n\tv |= v >> 1;\r\n\tv |= v >> 2;\r\n\tv |= v >> 4;\r\n\tv |= v >> 8;\r\n\tv |= v >> 16;\r\n\tv++;\r\n\treturn v >= TEX_MIN_SIZE ? 
v : TEX_MIN_SIZE;\r\n}\r\n\r\nPPGraphics* PPGraphics::Get()\r\n{\r\n\tif(mInstance == nullptr)\r\n\t{\r\n\t\tmInstance = new PPGraphics();\r\n\t}\r\n\treturn mInstance;\r\n}\r\n\r\nvoid PPGraphics::GraphicsInit()\r\n{\r\n\t// shared data\r\n\tint arrSize = IM_ARRAYSIZE(mCircleVertex12);\r\n\tfor(int i = 0; i < arrSize; ++i)\r\n\t{\r\n\t\tconst float a = ((float)i * 2 * M_PI) / (float)IM_ARRAYSIZE(mCircleVertex12);\r\n\t\tmCircleVertex12[i] = Vector2{ cosf(a), sinf(a) };\r\n\t}\r\n\r\n\t// init\r\n\tgfxInitDefault();\r\n\tC3D_Init(C3D_DEFAULT_CMDBUF_SIZE);\r\n\r\n\t//---------------------------------------------------------------------\r\n\t// Setup render target\r\n\t//---------------------------------------------------------------------\r\n\tmRenderTargetTop = C3D_RenderTargetCreate(240, 400, GPU_RB_RGBA8, GPU_RB_DEPTH24_STENCIL8);\r\n\tC3D_RenderTargetClear(mRenderTargetTop, C3D_CLEAR_ALL, CLEAR_COLOR, 0);\r\n\tC3D_RenderTargetSetOutput(mRenderTargetTop, GFX_TOP, GFX_LEFT, DISPLAY_TRANSFER_FLAGS);\r\n\t\r\n\tmRenderTargetBtm = C3D_RenderTargetCreate(240, 320, GPU_RB_RGBA8, GPU_RB_DEPTH24_STENCIL8);\r\n\tC3D_RenderTargetClear(mRenderTargetBtm, C3D_CLEAR_ALL, CLEAR_COLOR, 0);\r\n\tC3D_RenderTargetSetOutput(mRenderTargetBtm, GFX_BOTTOM, GFX_LEFT, DISPLAY_TRANSFER_FLAGS);\r\n\r\n\t//---------------------------------------------------------------------\r\n\t// Setup top screen sprite ( for video render )\r\n\t//---------------------------------------------------------------------\r\n\tmTopScreenSprite = new Sprite();\r\n\r\n\t//---------------------------------------------------------------------\r\n\t// Init memory pool\r\n\t// @Code borrow from libsf2d\r\n\t//---------------------------------------------------------------------\r\n\tmemoryPoolAddr = linearAlloc(0x80000);\r\n\tmemoryPoolSize = 0x80000;\r\n\t//---------------------------------------------------------------------\r\n\t// Load the vertex shader, create a shader program and bind 
it\r\n\t//---------------------------------------------------------------------\r\n\tmVShaderDVLB = DVLB_ParseFile((u32*)vshader_shbin, vshader_shbin_size);\r\n\tshaderProgramInit(&mShaderProgram);\r\n\tshaderProgramSetVsh(&mShaderProgram, &mVShaderDVLB->DVLE[0]);\r\n\tC3D_BindProgram(&mShaderProgram);\r\n\r\n\tmULocProjection = shaderInstanceGetUniformLocation(mShaderProgram.vertexShader, \"projection\");\r\n\r\n\tMtx_OrthoTilt(&mProjectionTop, 0.0, 400, 240, 0.0, 0.0, 1.0, true);\r\n\tMtx_OrthoTilt(&mProjectionBtm, 0.0, 320, 240, 0.0, 0.0, 1.0, true);\r\n\r\n\tC3D_CullFace(GPU_CULL_NONE);\r\n\tC3D_DepthTest(true, GPU_GEQUAL, GPU_WRITE_ALL);\r\n\t\r\n\r\n\t//---------------------------------------------------------------------\r\n\t// Init top screen sprite\r\n\t//---------------------------------------------------------------------\r\n\tmTopScreenSprite->initialized = true;\r\n\tC3D_TexInit(&mTopScreenSprite->tex, 512, 256, GPU_RGB8);\r\n\tC3D_TexSetFilter(&mTopScreenSprite->tex, GPU_LINEAR, GPU_NEAREST);\r\n\tC3D_TexSetWrap(&mTopScreenSprite->tex, GPU_CLAMP_TO_BORDER, GPU_CLAMP_TO_BORDER);\r\n\t//---------------------------------------------------------------------\r\n\t// System font init\r\n\t//---------------------------------------------------------------------\r\n\tResult res = fontEnsureMapped();\r\n\tif (R_FAILED(res)) printf(\"fontEnsureMapped failed\\n\");\r\n\r\n\tint i;\r\n\tTGLP_s* glyphInfo = fontGetGlyphInfo();\r\n\tmGlyphSheets = (C3D_Tex*)malloc(sizeof(C3D_Tex)*glyphInfo->nSheets);\r\n\tfor (i = 0; i < glyphInfo->nSheets; i++)\r\n\t{\r\n\t\tC3D_Tex* tex = &mGlyphSheets[i];\r\n\t\ttex->data = fontGetGlyphSheetTex(i);\r\n\t\ttex->fmt = glyphInfo->sheetFmt;\r\n\t\ttex->size = glyphInfo->sheetSize;\r\n\t\ttex->width = glyphInfo->sheetWidth;\r\n\t\ttex->height = glyphInfo->sheetHeight;\r\n\t\ttex->param = GPU_TEXTURE_MAG_FILTER(GPU_LINEAR) | GPU_TEXTURE_MIN_FILTER(GPU_LINEAR)\r\n\t\t\t| GPU_TEXTURE_WRAP_S(GPU_CLAMP_TO_EDGE) | GPU_TEXTURE_WRAP_T(GPU_CLAMP_TO_EDGE);\r\n\t\ttex->border = 0;\r\n\t\ttex->lodParam = 
0;\r\n\t}\r\n}\r\n\r\nvoid PPGraphics::GraphicExit()\r\n{\r\n\t// clean up cache\r\n\tfor(auto it = mTexCached.begin(); it != mTexCached.end(); ++it) {\r\n\t\tif(it->second->initialized) C3D_TexDelete(&it->second->tex);\r\n\t\tdelete it->second;\r\n\t}\r\n\tmTexCached.clear();\r\n\t// delete top\r\n\tif(mTopScreenSprite->initialized)\r\n\t\tC3D_TexDelete(&mTopScreenSprite->tex);\r\n\tdelete mTopScreenSprite;\r\n\tlinearFree(memoryPoolAddr);\r\n\tfree(mGlyphSheets);\r\n\tshaderProgramFree(&mShaderProgram);\r\n\tDVLB_Free(mVShaderDVLB);\r\n\tC3D_Fini();\r\n\tgfxExit();\r\n}\r\n\r\nSprite* PPGraphics::AddCacheImageAsset(const char* name, std::string key)\r\n{\r\n\tauto iter = mTexCached.find(key);\r\n\tif (iter != mTexCached.end()) {\r\n\t\treturn iter->second;\r\n\t}\r\n\r\n\t// get image path\r\n\tchar path[100];\r\n\tsprintf(path, \"romfs:/assets/%s\", name);\r\n\treturn AddCacheImage(path, key);\r\n}\r\n\r\nSprite* PPGraphics::AddCacheImage(const char* path, std::string key)\r\n{\r\n\tauto iter = mTexCached.find(key);\r\n\tif (iter != mTexCached.end()) {\r\n\t\treturn iter->second;\r\n\t}\r\n\r\n\tunsigned char* image;\r\n\tunsigned width, height;\r\n\r\n\tint ret = lodepng_decode32_file(&image, &width, &height, path);\r\n\tif (ret != 0)\r\n\t{\r\n\t\tprintf(\"Failed to decode asset image: %s with key: %s\\n\", path, key.c_str());\r\n\t\treturn nullptr;\r\n\t}\r\n\t\r\n\tu8 *gpusrc = (u8*)linearAlloc(width*height * 4);\r\n\tu8* src = image; u8 *dst = gpusrc;\r\n\t// lodepng outputs big endian rgba so we need to convert\r\n\tfor (int i = 0; i<width*height; i++) {\r\n\t\tint r = *src++;\r\n\t\tint g = *src++;\r\n\t\tint b = *src++;\r\n\t\tint a = *src++;\r\n\t\t*dst++ = a;\r\n\t\t*dst++ = b;\r\n\t\t*dst++ = g;\r\n\t\t*dst++ = r;\r\n\t}\r\n\r\n\tSprite* sprite = new Sprite();\r\n\t// init texture\r\n\tbool success = C3D_TexInit(&sprite->tex, next_pow2(width), next_pow2(height), GPU_RGBA8);\r\n\tif (!success)\r\n\t{\r\n\t\tfree(image);\r\n\t\tlinearFree(gpusrc);\r\n\t\tdelete sprite;\r\n\t\treturn 
nullptr;\r\n\t}\r\n\tsprite->width = width;\r\n\tsprite->height = height;\r\n\tC3D_TexSetWrap(&sprite->tex, GPU_CLAMP_TO_BORDER, GPU_CLAMP_TO_BORDER);\r\n\r\n\tprintf(\"Added sprite: %s from path: %s\\n\", key.c_str(), path);\r\n\r\n\tGSPGPU_FlushDataCache(gpusrc, width*height * 4);\r\n\tC3D_SyncDisplayTransfer((u32*)gpusrc, GX_BUFFER_DIM(width, height), (u32*)sprite->tex.data, GX_BUFFER_DIM(sprite->tex.width, sprite->tex.height), TEXTURE_RGBA_TRANSFER_FLAGS);\r\n\r\n\tfree(image);\r\n\tlinearFree(gpusrc);\r\n\r\n\tmTexCached[key] = sprite;\r\n\r\n\treturn sprite;\r\n}\r\n\r\nSprite* PPGraphics::AddCacheImage(u8* buf, u32 size, std::string key)\r\n{\r\n\tauto iter = mTexCached.find(key);\r\n\tif (iter != mTexCached.end()) {\r\n\t\treturn iter->second;\r\n\t}\r\n\t\r\n\t// load data\r\n\tunsigned char* image;\r\n\tunsigned width, height;\r\n\tint ret = lodepng_decode32(&image, &width, &height, buf, size);\r\n\tif(ret != 0)\r\n\t{\r\n\t\tprintf(\"Failed to decode png image with key: %s\\n\", key.c_str());\r\n\t\treturn nullptr;\r\n\t}\r\n\t\r\n\tu8 *gpusrc = (u8*)linearAlloc(width*height * 4);\r\n\tu8* src = image; u8 *dst = gpusrc;\r\n\t// lodepng outputs big endian rgba so we need to convert\r\n\tfor (int i = 0; i<width*height; i++) {\r\n\t\tint r = *src++;\r\n\t\tint g = *src++;\r\n\t\tint b = *src++;\r\n\t\tint a = *src++;\r\n\t\t*dst++ = a;\r\n\t\t*dst++ = b;\r\n\t\t*dst++ = g;\r\n\t\t*dst++ = r;\r\n\t}\r\n\r\n\tSprite* sprite = new Sprite();\r\n\t// init texture\r\n\tbool success = C3D_TexInit(&sprite->tex, next_pow2(width), next_pow2(height), GPU_RGBA8);\r\n\tif(!success)\r\n\t{\r\n\t\tfree(image);\r\n\t\tlinearFree(gpusrc);\r\n\t\tdelete sprite;\r\n\t\treturn nullptr;\r\n\t}\r\n\tsprite->width = width;\r\n\tsprite->height = height;\r\n\tC3D_TexSetWrap(&sprite->tex, GPU_CLAMP_TO_BORDER, GPU_CLAMP_TO_BORDER);\r\n\r\n\tprintf(\"added key: %s - size: %u\\n\", key.c_str(), size);\r\n\r\n\tGSPGPU_FlushDataCache(gpusrc, width*height * 4);\r\n\tC3D_SyncDisplayTransfer((u32*)gpusrc, GX_BUFFER_DIM(width, height), 
(u32*)sprite->tex.data, GX_BUFFER_DIM(sprite->tex.width, sprite->tex.height), TEXTURE_RGBA_TRANSFER_FLAGS);\r\n\r\n\tfree(image);\r\n\tlinearFree(gpusrc);\r\n\r\n\tmTexCached[key] = sprite;\r\n\r\n\treturn sprite;\r\n}\r\n\r\nSprite* PPGraphics::GetCacheImage(std::string key)\r\n{\r\n\tauto iter = mTexCached.find(key);\r\n\tif (iter != mTexCached.end()) {\r\n\t\treturn iter->second;\r\n\t}\r\n\tprintf(\"Image cache not found: %s\\n\", key.c_str());\r\n\treturn nullptr;\r\n}\r\n\r\nPPGraphics::~PPGraphics()\r\n{\r\n\r\n}\r\n\r\nvoid PPGraphics::BeginRender()\r\n{\r\n\tresetMemoryPool();\r\n\tC3D_FrameBegin(C3D_FRAME_SYNCDRAW);\r\n}\r\n\r\nvoid PPGraphics::RenderOn(gfxScreen_t screen)\r\n{\r\n\tmCurrentDrawScreen = screen;\r\n\tif (screen == GFX_TOP)\r\n\t{\r\n\t\tC3D_FrameDrawOn(mRenderTargetTop);\r\n\t\t// set uniform projection\r\n\t\tC3D_FVUnifMtx4x4(GPU_VERTEX_SHADER, mULocProjection, &mProjectionTop);\r\n\t\t// draw a transparent and small dot to avoid soft lock\r\n\t\t// @ref : https://github.com/fincs/citro3d/issues/35\r\n\t\tDrawRectangle(0, 0, 1, 1, Color{ 0,0,0,255 });\r\n\t}\r\n\telse {\r\n\t\tC3D_FrameDrawOn(mRenderTargetBtm);\r\n\t\t// set uniform projection\r\n\t\tC3D_FVUnifMtx4x4(GPU_VERTEX_SHADER, mULocProjection, &mProjectionBtm);\r\n\t\t// draw a transparent and small dot to avoid soft lock\r\n\t\t// @ref : https://github.com/fincs/citro3d/issues/35\r\n\t\tDrawRectangle(0, 0, 1, 1, Color{ 0,0,0,255 });\r\n\t}\r\n}\r\n\r\nvoid PPGraphics::EndRender()\r\n{\r\n\tC3D_FrameEnd(0);\r\n}\r\n\r\nvoid PPGraphics::UpdateTopScreenSprite(u8* data, u32 size)\r\n{\r\n\tif (!mTopScreenSprite->initialized) return;\r\n\tmemcpy(mTopScreenSprite->tex.data, data, size);\r\n\tGSPGPU_FlushDataCache(mTopScreenSprite->tex.data, size);\r\n}\r\n\r\nvoid PPGraphics::DrawTopScreenSprite()\r\n{\r\n\tif (!mTopScreenSprite->initialized) return;\r\n\r\n\tfloat x = 0, y = 0, w = 400.0f, h = 240.0f;\r\n\tfloat ow = 512.0f, oh = 256.0f;\r\n\tfloat u = 1;\r\n\tfloat v = 
1;\r\n\r\n\tVertexPosTex *vertices = (VertexPosTex*)allocMemoryPoolAligned(sizeof(VertexPosTex) * 4, 8);\r\n\t\r\n\t// set position\r\n\tvertices[0].position = (Vector3) { 0, 0, 0.5f };\r\n\tvertices[1].position = (Vector3) { 512.0f, 0, 0.5f };\r\n\tvertices[2].position = (Vector3) { 0, 256.0f, 0.5f };\r\n\tvertices[3].position = (Vector3) { 512.0f, 256.0f, 0.5f };\r\n\tvertices[0].textcoord = (Vector2) { 0.0f, 1.0f };\r\n\tvertices[1].textcoord = (Vector2) { 1.0f, 1.0f};\r\n\tvertices[2].textcoord = (Vector2) { 0.0f, 0.0f };\r\n\tvertices[3].textcoord = (Vector2) { 1.0f, 0.0f};\r\n\t\r\n\t// setup env\r\n\tC3D_TexBind(getTextUnit(GPU_TEXUNIT0), &mTopScreenSprite->tex);\r\n\tC3D_TexEnv* env = C3D_GetTexEnv(0);\r\n\tC3D_TexEnvSrc(env, C3D_Both, GPU_TEXTURE0, 0, 0);\r\n\tC3D_TexEnvOp(env, C3D_Both, 0, 0, 0);\r\n\tC3D_TexEnvFunc(env, C3D_Both, GPU_REPLACE);\r\n\r\n\tC3D_AttrInfo* attrInfo = C3D_GetAttrInfo();\r\n\tAttrInfo_Init(attrInfo);\r\n\tAttrInfo_AddLoader(attrInfo, 0, GPU_FLOAT, 3);\r\n\tAttrInfo_AddLoader(attrInfo, 1, GPU_FLOAT, 2);\r\n\r\n\tC3D_BufInfo* bufInfo = C3D_GetBufInfo();\r\n\tBufInfo_Init(bufInfo);\r\n\tBufInfo_Add(bufInfo, vertices, sizeof(VertexPosTex), 2, 0x10);\r\n\r\n\tC3D_DrawArrays(GPU_TRIANGLE_STRIP, 0, 4);\r\n}\r\n\r\nvoid PPGraphics::setupForPosCollEnv(void* vertices)\r\n{\r\n\tC3D_TexEnv* env = C3D_GetTexEnv(0);\r\n\tC3D_TexEnvSrc(env, C3D_Both, GPU_PRIMARY_COLOR, 0, 0);\r\n\tC3D_TexEnvOp(env, C3D_Both, 0, 0, 0);\r\n\tC3D_TexEnvFunc(env, C3D_Both, GPU_REPLACE);\r\n\r\n\tC3D_AttrInfo* attrInfo = C3D_GetAttrInfo();\r\n\tAttrInfo_Init(attrInfo);\r\n\tAttrInfo_AddLoader(attrInfo, 0, GPU_FLOAT, 3);\r\n\tAttrInfo_AddLoader(attrInfo, 1, GPU_UNSIGNED_BYTE, 4);\r\n\r\n\tC3D_BufInfo* bufInfo = C3D_GetBufInfo();\r\n\tBufInfo_Init(bufInfo);\r\n\tBufInfo_Add(bufInfo, vertices, sizeof(VertexPosCol), 2, 0x10);\r\n}\r\n\r\nvoid PPGraphics::setupForPosTexlEnv(void* vertices, u32 color, int texID)\r\n{\r\n\tC3D_TexEnv* env = 
C3D_GetTexEnv(0);\r\n\tC3D_TexEnvSrc(env, C3D_RGB, GPU_CONSTANT, 0, 0);\r\n\tC3D_TexEnvSrc(env, C3D_Alpha, GPU_TEXTURE0, GPU_CONSTANT, 0);\r\n\tC3D_TexEnvOp(env, C3D_Both, 0, 0, 0);\r\n\tC3D_TexEnvFunc(env, C3D_RGB, GPU_REPLACE);\r\n\tC3D_TexEnvFunc(env, C3D_Alpha, GPU_MODULATE);\r\n\tC3D_TexEnvColor(env, color);\r\n\r\n\tC3D_AttrInfo* attrInfo = C3D_GetAttrInfo();\r\n\tAttrInfo_Init(attrInfo);\r\n\tAttrInfo_AddLoader(attrInfo, 0, GPU_FLOAT, 3);\r\n\tAttrInfo_AddLoader(attrInfo, 1, GPU_FLOAT, 2);\r\n\r\n\tC3D_BufInfo* bufInfo = C3D_GetBufInfo();\r\n\tBufInfo_Init(bufInfo);\r\n\tBufInfo_Add(bufInfo, vertices, sizeof(VertexPosTex), 2, 0x10);\r\n}\r\n\r\nint PPGraphics::getTextUnit(GPU_TEXUNIT unit)\r\n{\r\n\tswitch (unit) {\r\n\tcase GPU_TEXUNIT0: return 0;\r\n\tcase GPU_TEXUNIT1: return 1;\r\n\tcase GPU_TEXUNIT2: return 2;\r\n\tdefault:           return -1;\r\n\t}\r\n}\r\n\r\nvoid* PPGraphics::allocMemoryPoolAligned(u32 size, u32 alignment)\r\n{\r\n\tu32 new_index = (memoryPoolIndex + alignment - 1) & ~(alignment - 1);\r\n\tif ((new_index + size) < memoryPoolSize) {\r\n\t\tvoid *addr = (void *)((u32)memoryPoolAddr + new_index);\r\n\t\tmemoryPoolIndex = new_index + size;\r\n\t\treturn addr;\r\n\t}\r\n\treturn NULL;\r\n}\r\n\r\nvoid PPGraphics::DrawImage(Sprite* sprite, int x, int y)\r\n{\r\n\tDrawImage(sprite, x, y, sprite->width, sprite->height);\r\n}\r\n\r\nvoid PPGraphics::DrawImage(Sprite* sprite, int x, int y, int w, int h)\r\n{\r\n\tDrawImage(sprite, x, y, w, h, 0);\r\n}\r\n\r\nvoid PPGraphics::DrawImage(Sprite* sprite, int x, int y, int w, int h, int degrees)\r\n{\r\n\tDrawImage(sprite, x, y, w, h, degrees, Vector2{ 0.5f, 0.5f });\r\n}\r\n\r\nvoid PPGraphics::DrawImage(Sprite* sprite, int x, int y, int w, int h, int degrees, Vector2 anchor)\r\n{\r\n\tif (!sprite) return;\r\n\tC3D_TexFlush(&sprite->tex);\r\n\r\n\t// bind texture\r\n\tC3D_TexBind(getTextUnit(GPU_TEXUNIT0), &sprite->tex);\r\n\tC3D_TexEnv* env = C3D_GetTexEnv(0);\r\n\tC3D_TexEnvSrc(env, C3D_Both, 
GPU_TEXTURE0, 0, 0);\r\n\tC3D_TexEnvOp(env, C3D_Both, 0, 0, 0);\r\n\tC3D_TexEnvFunc(env, C3D_Both, GPU_REPLACE);\r\n\r\n\r\n\tVertexPosTex *vertices = (VertexPosTex*)allocMemoryPoolAligned(sizeof(VertexPosTex) * 4, 8);\r\n\tif (!vertices) return;\r\n\r\n\t// set position\r\n\tvertices[0].position = (Vector3) { x, y, 0.5f };\r\n\tvertices[1].position = (Vector3) { w + x, y, 0.5f };\r\n\tvertices[2].position = (Vector3) { x, h + y, 0.5f };\r\n\tvertices[3].position = (Vector3) { w + x, h + y, 0.5f };\r\n\r\n\tfloat u = sprite->width / (float)sprite->tex.width;\r\n\tfloat v = sprite->height / (float)sprite->tex.height;\r\n\r\n\t// set uv\r\n\tvertices[0].textcoord = (Vector2) { 0.0f, 0.0f };\r\n\tvertices[1].textcoord = (Vector2) { u, 0.0f };\r\n\tvertices[2].textcoord = (Vector2) { 0.0f, v };\r\n\tvertices[3].textcoord = (Vector2) { u, v };\r\n\r\n\t// rotate if needed\r\n\tif (degrees != 0)\r\n\t{\r\n\t\tfloat rad = (float)degrees * M_PI / 180.f;\r\n\t\tconst float c = cosf(rad);\r\n\t\tconst float s = sinf(rad);\r\n\t\tfloat cx = x + w * anchor.x;\r\n\t\tfloat cy = y + h * anchor.y;\r\n\t\tint i;\r\n\t\tfor (i = 0; i < 4; ++i) { // Rotate and translate\r\n\t\t\tfloat _x = cx - vertices[i].position.x;\r\n\t\t\tfloat _y = cy - vertices[i].position.y;\r\n\t\t\tvertices[i].position.x = _x*c - _y*s + cx;\r\n\t\t\tvertices[i].position.y = _x*s + _y*c + cy;\r\n\t\t}\r\n\t}\r\n\r\n\t// setup attributes\r\n\tC3D_AttrInfo* attrInfo = C3D_GetAttrInfo();\r\n\tAttrInfo_Init(attrInfo);\r\n\tAttrInfo_AddLoader(attrInfo, 0, GPU_FLOAT, 3);\r\n\tAttrInfo_AddLoader(attrInfo, 1, GPU_FLOAT, 2);\r\n\r\n\tC3D_BufInfo* bufInfo = C3D_GetBufInfo();\r\n\tBufInfo_Init(bufInfo);\r\n\tBufInfo_Add(bufInfo, vertices, sizeof(VertexPosTex), 2, 0x10);\r\n\r\n\tC3D_DrawArrays(GPU_TRIANGLE_STRIP, 0, 4);\r\n}\r\n\r\nvoid PPGraphics::DrawRectangle(float x, float y, float w, float h, Color color, float rounding)\r\n{\r\n\tu32 col = color.toU32();\r\n\r\n\tif(rounding > 0.0f)\r\n\t{\r\n\t\tVertexPosCol* 
vertices = (VertexPosCol*)allocMemoryPoolAligned(sizeof(VertexPosCol) * 66, 8);\r\n\t\tif (!vertices) return;\r\n\t\tint vIndex = -1;\r\n\t\tVector3 outters[8];\r\n\t\tint outterIdx = -1;\r\n\r\n\t\tVector3 tl = Vector3{ x + rounding, y + rounding, 0.5f };\r\n\t\tVector3 tr = Vector3{ x + w - rounding, y + rounding, 0.5f };\r\n\t\tVector3 bl = Vector3{ x + rounding, y + h - rounding, 0.5f };\r\n\t\tVector3 br = Vector3{ x + w - rounding, y + h - rounding, 0.5f };\r\n\r\n\t\t// main rect\r\n\t\tvertices[++vIndex] = VertexPosCol { tl , col };\t\t\t// top-left\r\n\t\tvertices[++vIndex] = VertexPosCol { tr , col };\t\t\t// top-right\r\n\t\tvertices[++vIndex] = VertexPosCol { bl , col };\t\t\t// bottom-left\r\n\r\n\t\tvertices[++vIndex] = VertexPosCol { tr , col };\t\t\t// top-right\r\n\t\tvertices[++vIndex] = VertexPosCol { bl , col };\t\t\t// bottom-left\r\n\t\tvertices[++vIndex] = VertexPosCol { br , col };\t\t\t// bottom-right\r\n\r\n\t\t// arc path func\r\n\t\t// tl : 6, 9     tr : 9, 12     br : 0, 3    bl : 3, 6\r\n\t\tconst std::function<void(Vector3 center, int min, int max)> arcVertices = [&](Vector3 center, int min, int max)\r\n\t\t{\r\n\t\t\tVector3 result[4];\r\n\t\t\tint i = -1;\r\n\t\t\tfor (int a = min; a <= max; ++a)\r\n\t\t\t{\r\n\t\t\t\tconst Vector2& v = mCircleVertex12[a % IM_ARRAYSIZE(mCircleVertex12)];\r\n\t\t\t\tresult[++i] = Vector3{ center.x + v.x * rounding, center.y + v.y * rounding, 0.5f };\r\n\t\t\t}\r\n\t\t\t// add vertices\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ center , col };\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ result[0] , col };\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ result[1] , col };\r\n\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ center , col };\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ result[1] , col };\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ result[2] , col };\r\n\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ center , col };\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ result[2] , col 
};\r\n\t\t\tvertices[++vIndex] = VertexPosCol{ result[3] , col };\r\n\r\n\t\t\toutters[++outterIdx] = result[0];\r\n\t\t\toutters[++outterIdx] = result[3];\r\n\t\t};\r\n\r\n\t\tarcVertices(tl, 6, 9);\r\n\t\tarcVertices(tr, 9, 12);\r\n\t\tarcVertices(br, 0, 3);\r\n\t\tarcVertices(bl, 3, 6);\r\n\r\n\t\t// outer rect\r\n\t\tvertices[++vIndex] = VertexPosCol{ tl , col };\t\t\t\r\n\t\tvertices[++vIndex] = VertexPosCol{ tr , col };\t\t\t\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[1] , col };\t\t\r\n\t\tvertices[++vIndex] = VertexPosCol{ tr , col };\t\t\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[1] , col };\t\t\t\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[2] , col };\t\t\r\n\r\n\t\tvertices[++vIndex] = VertexPosCol{ tr , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ br , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[3] , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[3] , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ br , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[4] , col };\r\n\r\n\t\tvertices[++vIndex] = VertexPosCol{ bl , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ br , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[5] , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ bl , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[5] , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[6] , col };\r\n\r\n\t\tvertices[++vIndex] = VertexPosCol{ tl , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ bl , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[0] , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ bl , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[0] , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ outters[7] , col };\r\n\r\n\t\tsetupForPosCollEnv(vertices);\r\n\r\n\t\tC3D_DrawArrays(GPU_TRIANGLES, 0, 66);\r\n\t}else\r\n\t{\r\n\t\tVertexPosCol* vertices = (VertexPosCol*)allocMemoryPoolAligned(sizeof(VertexPosCol) * 4, 8);\r\n\t\tif (!vertices) return;\r\n\t\tu32 
vIndex = 0;\r\n\r\n\t\tvertices[vIndex] = VertexPosCol{ Vector3{ x, y, 0.5f } , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ Vector3{ x + w, y, 0.5f } , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ Vector3{ x, y + h, 0.5f } , col };\r\n\t\tvertices[++vIndex] = VertexPosCol{ Vector3{ x + w, y + h, 0.5f } , col };\r\n\r\n\t\tsetupForPosCollEnv(vertices);\r\n\r\n\t\tC3D_DrawArrays(GPU_TRIANGLE_STRIP, 0, 4);\r\n\t}\r\n}\r\n\r\nvoid PPGraphics::StartMasked(float x, float y, float w, float h, gfxScreen_t screen) const\r\n{\r\n\tif(screen == GFX_TOP) C3D_SetScissor(GPU_SCISSOR_NORMAL, 240 - (y + h), 400 - (x + w), 240 - y, 400 - x);\r\n\telse C3D_SetScissor(GPU_SCISSOR_NORMAL, 240 - (y + h), 320 - (x + w), 240 - y, 320 - x);\r\n\t\r\n}\r\n\r\nvoid PPGraphics::StopMasked() const\r\n{\r\n\tC3D_SetScissor(GPU_SCISSOR_DISABLE, 0, 0, 0, 0);\r\n}\r\n\r\nvoid PPGraphics::DrawText(const char* text, float x, float y, float scaleX, float scaleY, Color color, bool baseline)\r\n{\r\n\tssize_t  units;\r\n\tu32 code;\r\n\r\n\tconst u8* p = (const u8*)text;\r\n\tfloat firstX = x;\r\n\tu32 flags = GLYPH_POS_CALC_VTXCOORD | (baseline ? 
GLYPH_POS_AT_BASELINE : 0);\r\n\tint lastSheet = -1;\r\n\tdo\r\n\t{\r\n\t\tif(!*p) break;\r\n\t\tunits = decode_utf8(&code, p);\r\n\t\tif(units == -1)\r\n\t\t\tbreak;\r\n\r\n\t\tp += units;\r\n\t\tif( code == '\\n')\r\n\t\t{\r\n\t\t\tx = firstX;\r\n\t\t\ty += scaleY * fontGetInfo()->lineFeed;\r\n\t\t}else if(code > 0)\r\n\t\t{\r\n\t\t\tint glyphIdx = fontGlyphIndexFromCodePoint(code);\r\n\t\t\tfontGlyphPos_s data;\r\n\t\t\tfontCalcGlyphPos(&data, glyphIdx, flags, scaleX, scaleY);\r\n\r\n\t\t\t// Bind the correct texture sheet\r\n\t\t\tif (data.sheetIndex != lastSheet)\r\n\t\t\t{\r\n\t\t\t\tlastSheet = data.sheetIndex;\r\n\t\t\t\tC3D_TexBind(getTextUnit(GPU_TEXUNIT0), &mGlyphSheets[lastSheet]);\r\n\t\t\t}\r\n\r\n\t\t\tVertexPosTex* vertices = (VertexPosTex*)allocMemoryPoolAligned(sizeof(VertexPosTex) * 4, 8);\r\n\t\t\tif (!vertices) \r\n\t\t\t\tbreak; // out of memory in pool\r\n\r\n\t\t\t// set position\r\n\t\t\tvertices[0].position = (Vector3) { x + data.vtxcoord.left, y + data.vtxcoord.bottom, 0.5f };\r\n\t\t\tvertices[1].position = (Vector3) { x + data.vtxcoord.right, y + data.vtxcoord.bottom, 0.5f };\r\n\t\t\tvertices[2].position = (Vector3) { x + data.vtxcoord.left, y + data.vtxcoord.top, 0.5f };\r\n\t\t\tvertices[3].position = (Vector3) { x + data.vtxcoord.right, y + data.vtxcoord.top, 0.5f };\r\n\r\n\t\t\t// set uv\r\n\t\t\tvertices[0].textcoord = (Vector2) { data.texcoord.left , data.texcoord.bottom };\r\n\t\t\tvertices[1].textcoord = (Vector2) { data.texcoord.right, data.texcoord.bottom };\r\n\t\t\tvertices[2].textcoord = (Vector2) { data.texcoord.left, data.texcoord.top };\r\n\t\t\tvertices[3].textcoord = (Vector2) { data.texcoord.right, data.texcoord.top };\r\n\r\n\t\t\tsetupForPosTexlEnv(vertices, color.toU32(), TEX_FONT_GRAPHIC);\r\n\r\n\t\t\tC3D_DrawArrays(GPU_TRIANGLE_STRIP, 0, 4);\r\n\r\n\t\t\tx += data.xAdvance;\r\n\t\t}\r\n\r\n\t} while (code > 0);\r\n}\r\n\r\nvoid PPGraphics::DrawTextAutoWrap(const char* text, float x, float y, float w, float 
scaleX, float scaleY,\r\n\tColor color, bool baseline)\r\n{\r\n\tssize_t  units;\r\n\tu32 code;\r\n\r\n\tconst u8* p = (const u8*)text;\r\n\tfloat firstX = x;\r\n\tu32 flags = GLYPH_POS_CALC_VTXCOORD | (baseline ? GLYPH_POS_AT_BASELINE : 0);\r\n\tint lastSheet = -1;\r\n\tdo\r\n\t{\r\n\t\tif (!*p) break;\r\n\t\tunits = decode_utf8(&code, p);\r\n\t\tif (units == -1)\r\n\t\t\tbreak;\r\n\r\n\t\tp += units;\r\n\r\n\t\tif (code == '\\n')\r\n\t\t{\r\n\t\t\tx = firstX;\r\n\t\t\ty += scaleY * fontGetInfo()->lineFeed;\r\n\t\t}\r\n\t\telse if (code > 0)\r\n\t\t{\r\n\t\t\tint glyphIdx = fontGlyphIndexFromCodePoint(code);\r\n\t\t\tfontGlyphPos_s data;\r\n\t\t\tfontCalcGlyphPos(&data, glyphIdx, flags, scaleX, scaleY);\r\n\t\t\t\r\n\t\t\t// Bind the correct texture sheet\r\n\t\t\tif (data.sheetIndex != lastSheet)\r\n\t\t\t{\r\n\t\t\t\tlastSheet = data.sheetIndex;\r\n\t\t\t\tC3D_TexBind(getTextUnit(GPU_TEXUNIT0), &mGlyphSheets[lastSheet]);\r\n\t\t\t}\r\n\r\n\t\t\tVertexPosTex* vertices = (VertexPosTex*)allocMemoryPoolAligned(sizeof(VertexPosTex) * 4, 8);\r\n\t\t\tif (!vertices)\r\n\t\t\t\tbreak; // out of memory in pool\r\n\r\n\t\t\t\t\t   // set position\r\n\t\t\tvertices[0].position = (Vector3) { x + data.vtxcoord.left, y + data.vtxcoord.bottom, 0.5f };\r\n\t\t\tvertices[1].position = (Vector3) { x + data.vtxcoord.right, y + data.vtxcoord.bottom, 0.5f };\r\n\t\t\tvertices[2].position = (Vector3) { x + data.vtxcoord.left, y + data.vtxcoord.top, 0.5f };\r\n\t\t\tvertices[3].position = (Vector3) { x + data.vtxcoord.right, y + data.vtxcoord.top, 0.5f };\r\n\r\n\t\t\t// set uv\r\n\t\t\tvertices[0].textcoord = (Vector2) { data.texcoord.left, data.texcoord.bottom };\r\n\t\t\tvertices[1].textcoord = (Vector2) { data.texcoord.right, data.texcoord.bottom };\r\n\t\t\tvertices[2].textcoord = (Vector2) { data.texcoord.left, data.texcoord.top };\r\n\t\t\tvertices[3].textcoord = (Vector2) { data.texcoord.right, data.texcoord.top };\r\n\r\n\t\t\tsetupForPosTexlEnv(vertices, color.toU32(), 
TEX_FONT_GRAPHIC);\r\n\r\n\t\t\tC3D_DrawArrays(GPU_TRIANGLE_STRIP, 0, 4);\r\n\r\n\t\t\t// this line is seem to over\r\n\t\t\tif (x + data.width > w)\r\n\t\t\t{\r\n\t\t\t\tx = firstX;\r\n\t\t\t\ty += scaleY * fontGetInfo()->lineFeed;\r\n\t\t\t}\r\n\t\t\telse\r\n\t\t\t{\r\n\t\t\t\tx += data.xAdvance;\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t} while (code > 0);\r\n}\r\n\r\n\r\nVector2 PPGraphics::GetTextSize(const char* text, float scaleX, float scaleY)\r\n{\r\n\tssize_t  units;\r\n\tu32 code;\r\n\r\n\tVector2 result;\r\n\tfloat maxW = 0, cW = 0;\r\n\r\n\tconst u8* p = (const u8*)text;\r\n\tu32 flags = GLYPH_POS_CALC_VTXCOORD | 0;\r\n\tresult.y = scaleY * fontGetInfo()->lineFeed;\r\n\tdo\r\n\t{\r\n\t\tif (!*p) break;\r\n\t\tunits = decode_utf8(&code, p);\r\n\t\tif (units == -1)\r\n\t\t\tbreak;\r\n\t\tp += units;\r\n\t\tif (code == '\\n')\r\n\t\t{\r\n\t\t\tif(cW < maxW) cW = maxW;\r\n\t\t\tmaxW = 0;\r\n\t\t\tresult.y += scaleY * fontGetInfo()->lineFeed;\r\n\t\t}else if (code > 0)\r\n\t\t{\r\n\t\t\tint glyphIdx = fontGlyphIndexFromCodePoint(code);\r\n\t\t\tfontGlyphPos_s data;\r\n\t\t\tfontCalcGlyphPos(&data, glyphIdx, flags, scaleX, scaleY);\r\n\t\t\tmaxW += data.xAdvance;\r\n\t\t}\r\n\r\n\t} while (code > 0);\r\n\tif (cW < maxW) cW = maxW;\r\n\tresult.x = cW;\r\n\r\n\treturn result;\r\n}\r\n\r\nVector3 PPGraphics::GetTextSizeAutoWrap(const char* text, float scaleX, float scaleY, float w)\r\n{\r\n\tssize_t  units;\r\n\tu32 code;\r\n\r\n\tVector3 result;\r\n\tfloat maxW = 0;\r\n\tfloat padding = 2;\r\n\r\n\tconst u8* p = (const u8*)text;\r\n\tu32 flags = GLYPH_POS_CALC_VTXCOORD | 0;\r\n\tresult.x = 0;\r\n\tresult.z = 1; // lines\r\n\tresult.y = scaleY * fontGetInfo()->lineFeed + padding; // height\r\n\tdo\r\n\t{\r\n\t\tif (!*p) break;\r\n\t\tunits = decode_utf8(&code, p);\r\n\t\tif (units == -1)\r\n\t\t\tbreak;\r\n\t\tp += units;\r\n\t\tif (code == '\\n')\r\n\t\t{\r\n\t\t\tif (result.x < maxW) result.x = maxW;\r\n\t\t\tmaxW = 0;\r\n\t\t\tresult.y += scaleY * fontGetInfo()->lineFeed 
+ padding;\r\n\t\t\tresult.z += 1;\r\n\t\t}\r\n\t\telse if (code > 0)\r\n\t\t{\r\n\t\t\tint glyphIdx = fontGlyphIndexFromCodePoint(code);\r\n\t\t\tfontGlyphPos_s data;\r\n\t\t\tfontCalcGlyphPos(&data, glyphIdx, flags, scaleX, scaleY);\r\n\t\t\tif (maxW + data.width > w)\r\n\t\t\t{\r\n\t\t\t\tif (result.x < maxW) result.x = maxW;\r\n\t\t\t\tmaxW = 0;\r\n\t\t\t\tresult.y += scaleY * fontGetInfo()->lineFeed + padding;\r\n\t\t\t\tresult.z += 1;\r\n\t\t\t}\r\n\t\t\telse maxW += data.xAdvance;\r\n\t\t}\r\n\r\n\t} while (code > 0);\r\n\tif (result.x < maxW) result.x = maxW;\r\n\r\n\treturn result;\r\n}\r\n"
  },
  {
    "path": "PinBox/PinBox/source/PPMessage.cpp",
    "content": "#include \"PPMessage.h\"\r\n\r\nPPMessage::~PPMessage()\r\n{\r\n\tif (g_content != nullptr) \r\n\t\tfree(g_content);\r\n}\r\n\r\nu8* PPMessage::BuildMessage(u8* contentBuffer, u32 contentSize)\r\n{\r\n\tg_contentSize = contentSize;\r\n\t//-----------------------------------------------\r\n\t// alloc msg buffer block\r\n\tu8* msgBuffer = (u8*)malloc(sizeof(u8) * (contentSize + 9));\r\n\t//-----------------------------------------------\r\n\t// build header\r\n\tu8* pointer = msgBuffer;\r\n\t// 1, validate code\r\n\tWRITE_CHAR_PTR(pointer, g_validateCode, 4);\r\n\t// 2, message code\r\n\tWRITE_U8(pointer, g_code);\r\n\t// 3, content size\r\n\tWRITE_U32(pointer, g_contentSize);\r\n\t//-----------------------------------------------\r\n\t// build content data\r\n\tif (g_contentSize > 0) {\r\n\t\tmemcpy(msgBuffer + 9, contentBuffer, contentSize);\r\n\t}\r\n\t//-----------------------------------------------\r\n\treturn msgBuffer;\r\n}\r\n\r\n\r\nu8* PPMessage::BuildMessageEmpty()\r\n{\r\n\treturn BuildMessage(nullptr, 0);\r\n}\r\n\r\nvoid PPMessage::BuildMessageHeader(u8 code)\r\n{\r\n\tg_code = code;\r\n}\r\n\r\n\r\nbool PPMessage::ParseHeader(u8* buffer)\r\n{\r\n\tif (IS_INVALID_CODE(buffer, 0))\r\n\t{\r\n\t\tprintf(\"Parse header failed. Validate code is incorrect: %c%c%c%c \\n\", buffer[0], buffer[1], buffer[2], buffer[3]);\r\n\t\treturn false;\r\n\t}\r\n\t//-----------------------------------------------------------\r\n\tsize_t readIndex = 4;\r\n\tg_code = READ_U8(buffer, readIndex); readIndex += 1;\r\n\tg_contentSize = READ_U32(buffer, readIndex); readIndex += 4;\r\n\treturn true;\r\n}\r\n\r\nvoid PPMessage::ClearHeader()\r\n{\r\n\tg_code = 0;\r\n\tg_contentSize = 0;\r\n}\r\n"
  },
  {
    "path": "PinBox/PinBox/source/PPSession.cpp",
    "content": "#include \"PPSession.h\"\r\n#include \"PPGraphics.h\"\r\n#include \"PPSessionManager.h\"\r\n#include \"ConfigManager.h\"\r\n\r\n#define ONE_MILLISECOND 1000000ULL\r\n#define ONE_MICROSECOND 1000ULL\r\n#define BUFFERSIZE 0x1000\r\n#define BUFFER_POOL_SIZE (BUFFERSIZE * 12)\r\n// static buffer to store socket data\r\nstatic u8*\t\t\t\t\t\tg_receivedBuffer;\r\nstatic u64\t\t\t\t\t\tg_receivedSize;\r\nstatic u64\t\t\t\t\t\tg_waitForSize;\r\nstatic u32\t\t\t\t\t\tg_msgTag;\r\n\r\nnamespace { // helpers\r\n\tvoid createNew(void* arg) {\r\n\t\tstatic_cast<PPSession*>(arg)->threadMain();\r\n\t}\r\n\tvoid createTest(void* arg) {\r\n\t\tstatic_cast<PPSession*>(arg)->threadTest();\r\n\t}\r\n}\r\n\r\nPPSession::PPSession()\r\n{\r\n\t// init static buffer\r\n\tg_receivedBuffer = (u8*)malloc(BUFFER_POOL_SIZE);\r\n\tg_receivedSize = 0;\r\n\tg_waitForSize = 0;\r\n\tg_msgTag = 0;\r\n}\r\n\r\nPPSession::~PPSession()\r\n{\r\n\tReleaseSession();\r\n\r\n\tCleanUp();\r\n\t// free static buffer\r\n\tfree(g_receivedBuffer);\r\n\tfree(_ip);\r\n\tfree(_port);\r\n}\r\n\r\nvoid PPSession::InitTestSession(PPSessionManager* manager, const char* ip, const char* port)\r\n{\r\n\t_testConnectionResult = 0;\r\n\t_manager = manager;\r\n\t_ip = strdup(ip);\r\n\t_port = strdup(port);\r\n\ts32 priority = 0;\r\n\tsvcGetThreadPriority(&priority, CUR_THREAD_HANDLE);\r\n\ts32 t = priority - 2;\r\n\tif (t < 0x19) t = 0x19;\r\n\t// detached thread right after it created\r\n\t_thread = threadCreate(createTest, static_cast<void*>(this), 4 * 1024, t, -2, false);\r\n}\r\n\r\n\r\nvoid PPSession::InitSession(PPSessionManager* manager, const char* ip, const char* port)\r\n{\r\n\t_sendingMessages = std::queue<QueueMessage*>();\r\n\t_queueMessageMutex = new Mutex();\r\n\t//-------------------------------------------------------------\r\n\t_manager = manager;\r\n\t_ip = strdup(ip);\r\n\t_port = strdup(port);\r\n\t//-------------------------------------------------------------\r\n\t_authenticated = 
false;\r\n\ts32 priority = 0;\r\n\tsvcGetThreadPriority(&priority, CUR_THREAD_HANDLE);\r\n\ts32 t = priority - 2;\r\n\tif (t < 0x19) t = 0x19;\r\n\t_thread = threadCreate(createNew, static_cast<void*>(this), 4 * 1024, t, -2, false);\r\n}\r\n\r\n\r\nvoid PPSession::ReleaseSession()\r\n{\r\n\tif (!_running || _kill) return;\r\n\t_kill = true;\r\n\t_running = false;\r\n\t// join thread\r\n\tthreadJoin(_thread, U64_MAX);\r\n\tthreadFree(_thread);\r\n\t_thread = NULL;\r\n}\r\n\r\nvoid PPSession::CleanUp()\r\n{\r\n\t// reset receive state\r\n\tg_receivedSize = 0;\r\n\tg_waitForSize = 0;\r\n\tg_msgTag = 0;\r\n\t_authenticated = false;\r\n\tif (_tmpMessage)\r\n\t{\r\n\t\tdelete _tmpMessage;\r\n\t\t_tmpMessage = nullptr; // avoid a dangling pointer; processReceivedMsg checks this before reallocating\r\n\t}\r\n\r\n\t// cleanup hub\r\n\tfor (HubItem* item : _hubItems)\r\n\t{\r\n\t\tif (item->thumbSize > 0) free(item->thumbBuf);\r\n\t\tdelete item;\r\n\t}\r\n\t_hubItems.clear();\r\n}\r\n\r\nvoid PPSession::StartStream()\r\n{\r\n\t_manager->InitDecoder();\r\n\r\n\t// send message\r\n}\r\n\r\n\r\n\r\nvoid PPSession::StopStream()\r\n{\r\n\t_manager->ReleaseDecoder();\r\n\r\n\t//TODO: we should clean up any stream-related messages left behind\r\n\r\n\t// send message\r\n\r\n}\r\n\r\n\r\nvoid PPSession::connectToServer()\r\n{\r\n\t_connect_state = CONNECTING;\r\n\t//_manager->SetSessionState(SS_CONNECTING);\r\n\t//--------------------------------------------------\r\n\t// define socket\r\n\t_sock = socket(AF_INET, SOCK_STREAM, 0);\r\n\tif (_sock < 0)\r\n\t{\r\n\t\tprintf(\"Can't create new socket.\\n\");\r\n\t\tgfxFlushBuffers();\r\n\t\t// Error: can't create socket\r\n\t\t_connect_state = FAIL;\r\n\t\t_manager->SetSessionState(SS_FAILED);\r\n\t\treturn;\r\n\t}\r\n\r\n\tstruct sockaddr_in addr = { 0 };\r\n\taddr.sin_family = AF_INET;\r\n\tunsigned short nPort = (unsigned short)strtoul(_port, NULL, 0);\r\n\taddr.sin_port = htons(nPort);\r\n\t// inet_pton returns 0 for a malformed address and -1 on error, so treat anything <= 0 as failure\r\n\tif (inet_pton(addr.sin_family, _ip, &addr.sin_addr) <= 0)\r\n\t{\r\n\t\tprintf(\"Invalid IP or port: %s:%s.\\n\", _ip, 
_port);\r\n\t\tgfxFlushBuffers();\r\n\t\t_connect_state = FAIL;\r\n\t\t_manager->SetSessionState(SS_FAILED);\r\n\t\treturn;\r\n\t}\r\n\r\n\tprintf(\"Connect to: %s p:%s.\\n\", _ip, _port);\r\n\tgfxFlushBuffers();\r\n\r\n\tint ret = connect(_sock, (struct sockaddr *) &addr, sizeof(addr));\r\n\tif (ret < 0)\r\n\t{\r\n\t\tprintf(\"Could not connect to server.\\n\");\r\n\t\tgfxFlushBuffers();\r\n\t\t_connect_state = FAIL;\r\n\t\t_manager->SetSessionState(SS_FAILED);\r\n\t\treturn;\r\n\t}\r\n\r\n\t_connect_state = CONNECTED;\r\n\t_manager->SetSessionState(SS_CONNECTED);\r\n\r\n\t// set socket to non-blocking so we can control it easily\r\n\t//fcntl(sockManager->sock, F_SETFL, O_NONBLOCK);\r\n\tfcntl(_sock, F_SETFL, fcntl(_sock, F_GETFL, 0) | O_NONBLOCK);\r\n\tprintf(\"Connected to server.\\n\");\r\n\tgfxFlushBuffers();\r\n}\r\n\r\nvoid PPSession::closeConnect()\r\n{\r\n\t// close connection\r\n\tif(_connect_state == CONNECTED)\r\n\t{\r\n\t\tResult rc = closesocket(_sock);\r\n\t\tif(rc != 0)\r\n\t\t{\r\n\t\t\tprintf(\"Failed to close socket.\\n\");\r\n\t\t}\r\n\t}\r\n\t_sock = -1;\r\n\t_connect_state = IDLE;\r\n\r\n\tprintf(\"Closed session.\\n\");\r\n}\r\n\r\nvoid PPSession::recvSocketData()\r\n{\r\n\t//---------------------------------------------------------------------------------\r\n\t// receive data\r\n\t//---------------------------------------------------------------------------------\r\n\tint recvAmount = recv(_sock, g_receivedBuffer + g_receivedSize, BUFFERSIZE, 0); //TODO: Citra seems unable to handle this call and crashes immediately\r\n\tif (recvAmount <= 0)\r\n\t{\r\n\t\tif (errno != EWOULDBLOCK) {\r\n\t\t\tprintf(\"Error receiving packet: %d\\n\", recvAmount);\r\n\t\t\t_kill = true;\r\n\t\t}\r\n\t\treturn;\r\n\t}else if(recvAmount > BUFFERSIZE)\r\n\t{\r\n\t\t// something is wrong here, so do nothing\r\n\t\treturn;\r\n\t}else\r\n\t{\r\n\t\tg_receivedSize += recvAmount;\r\n\t}\r\n\t//printf(\"receive 
packet s:%d - total: %d - w:%d\\n\", recvAmount, g_receivedSize, g_waitForSize);\r\n\t//---------------------------------------------------------------------------------\r\n\t// process data\r\n\t//---------------------------------------------------------------------------------\r\n\tif (g_receivedSize < g_waitForSize || g_waitForSize == 0) return;\r\n\r\n\tint dataAfterProcess = g_receivedSize - g_waitForSize;\r\n\tdo {\r\n\t\t// remember how much we are about to consume\r\n\t\tu32 lastWaitForSize = g_waitForSize;\r\n\r\n\t\t// process message data\r\n\t\tprocessReceivedMsg(g_receivedBuffer, lastWaitForSize, g_msgTag);\r\n\r\n\t\t// shift the remaining bytes to the front; the regions overlap, so use memmove instead of memcpy\r\n\t\tmemmove(g_receivedBuffer, g_receivedBuffer + lastWaitForSize, dataAfterProcess);\r\n\r\n\t\t// reset information\r\n\t\tg_receivedSize = dataAfterProcess;\r\n\t\tif(g_receivedSize < g_waitForSize || g_waitForSize == 0) return;\r\n\r\n\t\tdataAfterProcess = g_receivedSize - g_waitForSize;\r\n\t} while (dataAfterProcess >= 0);\r\n}\r\n\r\nvoid PPSession::sendMessageData()\r\n{\r\n\t_queueMessageMutex->Lock();\r\n\twhile(!_sendingMessages.empty())\r\n\t{\r\n\t\t// get top message\r\n\t\tQueueMessage* queueMsg = (QueueMessage*)_sendingMessages.front();\r\n\t\t_sendingMessages.pop();\r\n\r\n\t\tif(queueMsg->msgSize > 0 && queueMsg->msgBuffer != nullptr)\r\n\t\t{\r\n\t\t\tu32 totalSent = 0;\r\n\t\t\t// send from the current offset so a partial send is not resent from the start\r\n\t\t\tdo\r\n\t\t\t{\r\n\t\t\t\tint sendAmount = send(_sock, queueMsg->msgBuffer + totalSent, queueMsg->msgSize - totalSent, 0);\r\n\t\t\t\tif (sendAmount < 0)\r\n\t\t\t\t{\r\n\t\t\t\t\t// send failed: flag the session thread to shut down instead of calling\r\n\t\t\t\t\t// ReleaseSession(), which would join this thread from itself\r\n\t\t\t\t\tprintf(\"Error when sending message.\\n\");\r\n\t\t\t\t\tfree(queueMsg->msgBuffer);\r\n\t\t\t\t\tdelete queueMsg;\r\n\t\t\t\t\t_kill = true;\r\n\t\t\t\t\t_queueMessageMutex->Unlock();\r\n\t\t\t\t\treturn;\r\n\t\t\t\t}\r\n\t\t\t\ttotalSent += sendAmount;\r\n\t\t\t} while (totalSent < queueMsg->msgSize);\r\n\t\t\t//--------------------------------------------------------\r\n\t\t\t// free message\r\n\t\t\tfree(queueMsg->msgBuffer);\r\n\t\t\tdelete queueMsg;\r\n\t\t}\r\n\t}\r\n\t_queueMessageMutex->Unlock();\r\n}\r\n\r\n\r\nvoid 
PPSession::threadMain()\r\n{\r\n\tif (_running) return;\r\n\t_running = true;\r\n\tu64 sleepDuration = ONE_MILLISECOND * 2;\r\n\t// thread loop\r\n\twhile (!_kill)\r\n\t{\r\n\t\tif (_connect_state == IDLE)\r\n\t\t{\r\n\t\t\tconnectToServer();\r\n\t\t}\r\n\t\tif (_connect_state == CONNECTED)\r\n\t\t{\r\n\t\t\t// check and recv data from server\r\n\t\t\trecvSocketData();\r\n\t\t\t// send queue message\r\n\t\t\tsendMessageData();\r\n\t\t}\r\n\r\n\t\tsvcSleepThread(sleepDuration);\r\n\t}\r\n\r\n\t// send all message left\r\n\tsendMessageData();\r\n\tstd::queue<QueueMessage*>().swap(_sendingMessages);\r\n\tdelete _queueMessageMutex;\r\n\r\n\t// close connection\r\n\tcloseConnect();\r\n}\r\n\r\nvoid PPSession::threadTest()\r\n{\r\n\tif (_running) return;\r\n\t_running = true;\r\n\tu64 sleepDuration = ONE_MILLISECOND * 2;\r\n\t_testConnectionResult = 0;\r\n\t// thread loop\r\n\twhile (!_kill)\r\n\t{\r\n\t\tif (_connect_state == IDLE)\r\n\t\t{\r\n\t\t\tconnectToServer();\r\n\t\t}\r\n\t\tif (_connect_state == CONNECTED)\r\n\t\t{\r\n\t\t\t_testConnectionResult = 1;\r\n\t\t}\r\n\t\tif (_connect_state == FAIL)\r\n\t\t{\r\n\t\t\t_testConnectionResult = -1;\r\n\t\t}\r\n\r\n\t\tsvcSleepThread(sleepDuration);\r\n\t}\r\n\t// close connection\r\n\tcloseConnect();\r\n}\r\n\r\nvoid PPSession::RequestForData(u32 size, u32 tag)\r\n{\r\n\tg_waitForSize = size;\r\n\tg_msgTag = tag;\r\n}\r\n\r\nvoid PPSession::AddMessageToQueue(u8* msgBuffer, int32_t msgSize)\r\n{\r\n\tif (_running && _connect_state == CONNECTED && !_kill) {\r\n\t\t_queueMessageMutex->Lock();\r\n\t\tQueueMessage *msg = new QueueMessage();\r\n\t\tmsg->msgBuffer = msgBuffer;\r\n\t\tmsg->msgSize = msgSize;\r\n\t\t_sendingMessages.push(msg);\r\n\t\t_queueMessageMutex->Unlock();\r\n\t}\r\n}\r\n\r\n\r\nvoid PPSession::processReceivedMsg(u8* buffer, u32 size, u32 tag)\r\n{\r\n\t//printf(\"process receive part size: %d - tag: %d.\\n\", size, tag);\r\n\t//------------------------------------------------------\r\n\t// verify 
authentication\r\n\t//------------------------------------------------------\r\n\tif (!_authenticated)\r\n\t{\r\n\t\tif (tag == PPREQUEST_AUTHEN)\r\n\t\t{\r\n\t\t\tPPMessage *authenMsg = new PPMessage();\r\n\t\t\tif (authenMsg->ParseHeader(buffer))\r\n\t\t\t\tif (authenMsg->GetMessageCode() == MSG_CODE_RESULT_AUTHENTICATION_SUCCESS)\r\n\t\t\t\t{\r\n\t\t\t\t\tprintf(\"Authenticated successfully.\\n\");\r\n\t\t\t\t\t_authenticated = true;\r\n\r\n\t\t\t\t\t_manager->SetBusyState(BS_NONE);\r\n\t\t\t\t\t// send message to get hub items and other information\r\n\t\t\t\t\tSendMsgRequestHubItems();\r\n\t\t\t\t\tdelete authenMsg; // was leaked on the success path\r\n\t\t\t\t\treturn;\r\n\t\t\t\t}\r\n\t\t\t\telse printf(\"Authentication failed.\\n\");\r\n\t\t\telse printf(\"Authentication failed.\\n\");\r\n\t\t\tdelete authenMsg;\r\n\t\t}\r\n\t\telse printf(\"Client is not authenticated.\\n\");\r\n\t\tRequestForData(MSG_COMMAND_SIZE, PPREQUEST_AUTHEN);\r\n\t\treturn;\r\n\t}\r\n\t//------------------------------------------------------\r\n\t// process data by tag\r\n\tif (!_tmpMessage) _tmpMessage = new PPMessage();\r\n\tswitch (tag)\r\n\t{\r\n\tcase PPREQUEST_HEADER:\r\n\t{\r\n\t\tif (_tmpMessage->ParseHeader(buffer)) {\r\n\t\t\tRequestForData(_tmpMessage->GetContentSize(), PPREQUEST_BODY);\r\n\t\t} else {\r\n\t\t\t_tmpMessage->ClearHeader();\r\n\t\t\tRequestForData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\t\t}\r\n\t\tbreak;\r\n\t}\r\n\tcase PPREQUEST_BODY:\r\n\t{\r\n\t\t//// if tmp message is null that mean this is useless data then we avoid it\r\n\t\t//if (_tmpMessage->GetContentSize() == 0) {\r\n\t\t//\t_tmpMessage->ClearHeader();\r\n\t\t//\t// we should prepare request for new header\r\n\t\t//\tRequestForData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\t\t//\treturn;\r\n\t\t//}\r\n\t\t// verify the buffer size against the expected content size\r\n\t\tif (size == _tmpMessage->GetContentSize())\r\n\t\t{\r\n\t\t\tprocessMessageData(buffer, size);\r\n\t\t\t// Request for next message\r\n\t\t\tRequestForData(MSG_COMMAND_SIZE, 
PPREQUEST_HEADER);\r\n\t\t}\r\n\t\t//------------------------------------------------------\r\n\t\t// remove message after use\r\n\t\t_tmpMessage->ClearHeader();\r\n\t\tbreak;\r\n\t}\r\n\tdefault: break;\r\n\t}\r\n}\r\n\r\n\r\nvoid PPSession::processMessageData(u8* buffer, size_t size)\r\n{\r\n\t// process message data by message type\r\n\tswitch (_tmpMessage->GetMessageCode())\r\n\t{\r\n\tcase MSG_CODE_REQUEST_NEW_SCREEN_FRAME:\r\n\t\t_manager->ProcessVideoFrame(buffer, size);\r\n\t\tbreak;\r\n\tcase MSG_CODE_REQUEST_NEW_AUDIO_FRAME:\r\n\t\t//TODO: the audio sleep currently slows everything down;\r\n\t\t// we should avoid this by doing it on the main thread\r\n\t\t_manager->ProcessAudioFrame(buffer, size);\r\n\t\tbreak;\r\n\tcase MSG_CODE_RECEIVED_HUB_ITEMS:\r\n\t{\r\n\t\t// braces scope the locals so the jump to the default label does not cross their initialization\r\n\t\tprintf(\"Got hub items data\\n\");\r\n\t\t// parse the hub item list\r\n\t\tu32 cursor = 0;\r\n\t\tint i = 0;\r\n\t\t// read number of hub items\r\n\t\tu16 count = READ_U16(buffer, cursor); cursor += 2;\r\n\t\tu16 fieldSize; // renamed; the old name shadowed the size parameter\r\n\r\n\t\tprintf(\"Count: %d\\n\", count);\r\n\t\t// read hub items\r\n\t\tfor(i = 0; i < count; ++i)\r\n\t\t{\r\n\t\t\tprintf(\"> Read item: %d\\n\", i);\r\n\t\t\tHubItem *item = new HubItem();\r\n\r\n\t\t\t// read type: 1 byte\r\n\t\t\tu8 type = READ_U8(buffer, cursor); cursor += 1;\r\n\t\t\titem->type = type;\r\n\t\t\tprintf(\"> Type: %d\\n\", type);\r\n\r\n\t\t\t// read uuid\r\n\t\t\t// size: 2 bytes\r\n\t\t\tfieldSize = READ_U16(buffer, cursor); cursor += 2;\r\n\t\t\titem->uuid.resize(fieldSize);\r\n\t\t\tmemcpy(&item->uuid[0], buffer + cursor, fieldSize);\r\n\t\t\tcursor += fieldSize;\r\n\t\t\tprintf(\"> uuid: %s :%d\\n\", item->uuid.c_str(), fieldSize);\r\n\r\n\t\t\t// read name\r\n\t\t\t// size: 2 bytes\r\n\t\t\tfieldSize = READ_U16(buffer, cursor); cursor += 2;\r\n\t\t\titem->name.resize(fieldSize);\r\n\t\t\tmemcpy(&item->name[0], buffer + cursor, fieldSize);\r\n\t\t\tcursor += fieldSize;\r\n\t\t\tprintf(\"> name: %s : %d\\n\", item->name.c_str(), fieldSize);\r\n\r\n\t\t\tif (type != HUB_SCREEN) {\r\n\t\t\t\t// read thumbnail\r\n\t\t\t\t// size: 4 bytes\r\n\t\t\t\titem->thumbSize = READ_U32(buffer, cursor);\r\n\t\t\t\tcursor += 4;\r\n#ifndef USE_CITRA\r\n\t\t\t\t// only works on a real 3DS device\r\n\t\t\t\tPPGraphics::Get()->AddCacheImage(&buffer[cursor], item->thumbSize, item->uuid);\r\n#else\r\n\t\t\t\t// workaround: caching the raw buffer directly makes the image behave unexpectedly,\r\n\t\t\t\t// so write it to a temp file first and cache it from there\r\n\t\t\t\tstd::string fname = \"pinbox/tmp/\" + item->uuid + \".png\";\r\n\t\t\t\tif (PPGraphics::Get()->AddCacheImage(fname.c_str(), item->uuid) == nullptr) {\r\n\t\t\t\t\tprintf(\"Write cache file: %s\\n\", fname.c_str());\r\n\t\t\t\t\t// save to file\r\n\t\t\t\t\tFILE *f = fopen(fname.c_str(), \"wb\");\r\n\t\t\t\t\tif (f != NULL)\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\tfwrite(buffer + cursor, sizeof(u8), item->thumbSize, f);\r\n\t\t\t\t\t\tfclose(f);\r\n\t\t\t\t\t\tPPGraphics::Get()->AddCacheImage(fname.c_str(), item->uuid);\r\n\t\t\t\t\t}\r\n\t\t\t\t}\r\n#endif\r\n\t\t\t\tcursor += item->thumbSize;\r\n\t\t\t}\r\n\r\n\t\t\t_hubItems.push_back(item);\r\n\t\t}\r\n\r\n\t\t_manager->SetBusyState(BS_NONE);\r\n\t\t_manager->SetSessionState(SS_PAIRED);\r\n\t\tbreak;\r\n\t}\r\n\tdefault:\r\n\t\tbreak;\r\n\t}\r\n}\r\n\r\n\r\n//-----------------------------------------------------\r\n// screen capture\r\n//-----------------------------------------------------\r\nvoid PPSession::SendMsgAuthentication()\r\n{\r\n\tPPMessage *msg = new PPMessage();\r\n\tmsg->BuildMessageHeader(MSG_CODE_REQUEST_AUTHENTICATION_SESSION);\r\n\tu8* msgBuffer = msg->BuildMessageEmpty();\r\n\tAddMessageToQueue(msgBuffer, msg->GetMessageSize());\r\n\tRequestForData(MSG_COMMAND_SIZE, PPREQUEST_AUTHEN);\r\n\tdelete msg;\r\n}\r\n\r\nvoid PPSession::SendMsgStartStream()\r\n{\r\n\tif (isSessionStarted) return;\r\n\tPPMessage *msg = new PPMessage();\r\n\tmsg->BuildMessageHeader(MSG_CODE_REQUEST_START_SCREEN_CAPTURE);\r\n\tu8* msgBuffer = msg->BuildMessageEmpty();\r\n\tAddMessageToQueue(msgBuffer, msg->GetMessageSize());\r\n\tisSessionStarted = 
true;\r\n\tdelete msg;\r\n}\r\n\r\nvoid PPSession::SendMsgStopStream()\r\n{\r\n\tif (!isSessionStarted) return;\r\n\t//--------------------------------------\r\n\tPPMessage *msg = new PPMessage();\r\n\tmsg->BuildMessageHeader(MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE);\r\n\tu8* msgBuffer = msg->BuildMessageEmpty();\r\n\tAddMessageToQueue(msgBuffer, msg->GetMessageSize());\r\n\tisSessionStarted = false;\r\n\tdelete msg;\r\n\t//--------------------------------------\r\n\tReleaseSession();\r\n}\r\n\r\nvoid PPSession::SendMsgChangeSetting()\r\n{\r\n\tPPMessage *authenMsg = new PPMessage();\r\n\tauthenMsg->BuildMessageHeader(MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE);\r\n\t//-----------------------------------------------\r\n\t// alloc msg content block\r\n\tsize_t contentSize = 13;\r\n\tu8* contentBuffer = (u8*)malloc(sizeof(u8) * contentSize);\r\n\tu8* pointer = contentBuffer;\r\n\t//----------------------------------------------\r\n\t// setting: wait for received frame\r\n\t//u8 _setting_waitToReceivedFrame = ConfigManager::Get()->_cfg_wait_for_received ? 1 : 0;\r\n\t//WRITE_U8(pointer, _setting_waitToReceivedFrame);\r\n\t//// setting: smooth frame number ( only activate if waitForReceivedFrame = true)\r\n\t//WRITE_U32(pointer, ConfigManager::Get()->_cfg_skip_frame);\r\n\t//// setting: frame quality [0 ... 100]\r\n\t//WRITE_U32(pointer, ConfigManager::Get()->_cfg_video_quality);\r\n\t//// setting: frame scale [0 ... 
100]\r\n\t//WRITE_U32(pointer, ConfigManager::Get()->_cfg_video_scale);\r\n\t//-----------------------------------------------\r\n\t// build message\r\n\tu8* msgBuffer = authenMsg->BuildMessage(contentBuffer, contentSize);\r\n\tAddMessageToQueue(msgBuffer, authenMsg->GetMessageSize());\r\n\tfree(contentBuffer);\r\n\tdelete authenMsg;\r\n}\r\n\r\nvoid PPSession::SendMsgRequestHubItems()\r\n{\r\n\tif (_connect_state == CONNECTED && _authenticated && _manager->GetBusyState() == BS_NONE) {\r\n\t\tprintf(\"Request for hub items\\n\");\r\n\t\t_manager->SetBusyState(BS_HUB_ITEMS);\r\n\t\t// cleanup items\r\n\t\t_hubItems.clear();\r\n\t\t// send request\r\n\t\tPPMessage *msg = new PPMessage();\r\n\t\tmsg->BuildMessageHeader(MSG_CODE_REQUEST_HUB_ITEMS);\r\n\t\tu8* msgBuffer = msg->BuildMessageEmpty();\r\n\t\tAddMessageToQueue(msgBuffer, msg->GetMessageSize());\r\n\t\tRequestForData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\t\tdelete msg;\r\n\t}\r\n}\r\n\r\n//-----------------------------------------------------\r\n// Input\r\n//-----------------------------------------------------\r\nbool PPSession::SendMsgSendInputData(u32 down, u32 up, short cx, short cy, short ctx, short cty)\r\n{\r\n\tif (!isSessionStarted) return false; // the function returns bool, so a bare return here was invalid\r\n\tPPMessage *msg = new PPMessage();\r\n\tmsg->BuildMessageHeader(MSG_CODE_SEND_INPUT_CAPTURE);\r\n\t//-----------------------------------------------\r\n\t// alloc msg content block\r\n\tconst size_t contentSize = 16;\r\n\tu8* contentBuffer = (u8*)malloc(sizeof(u8) * contentSize);\r\n\tu8* pointer = contentBuffer;\r\n\t//----------------------------------------------\r\n\tmemcpy(pointer, &down, 4);\r\n\tmemcpy(pointer + 4, &up, 4);\r\n\tmemcpy(pointer + 8, &cx, 2);\r\n\tmemcpy(pointer + 10, &cy, 2);\r\n\tmemcpy(pointer + 12, &ctx, 2);\r\n\tmemcpy(pointer + 14, &cty, 2);\r\n\t//-----------------------------------------------\r\n\t// build message and send\r\n\tu8* msgBuffer = msg->BuildMessage(contentBuffer, contentSize);\r\n\tAddMessageToQueue(msgBuffer, 
msg->GetMessageSize());\r\n\tfree(contentBuffer); // BuildMessage copies the content, so the scratch buffer must be freed (as in SendMsgChangeSetting)\r\n\tdelete msg;\r\n\treturn true;\r\n}"
  },
  {
    "path": "PinBox/PinBox/source/PPSessionManager.cpp",
    "content": "#include \"PPSessionManager.h\"\r\n#include \"PPGraphics.h\"\r\n#include \"ConfigManager.h\"\r\n#include \"Logger.h\"\r\n\r\nPPSessionManager::PPSessionManager()\r\n{\r\n}\r\n\r\nPPSessionManager::~PPSessionManager()\r\n{\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////////////////////////////////////\r\n// connection test\r\n///////////////////////////////////////////////////////////////////////////////////////////////////////////\r\n\r\nint PPSessionManager::TestConnection(ServerConfig* config)\r\n{\r\n\tif (_testSession != nullptr) {\r\n\t\tint ret = _testSession->GetTestConnectionResult();\r\n\r\n\t\tprintf(\"RET: %d.\\n\", ret);\r\n\t\tif(ret != 0)\r\n\t\t{\r\n\t\t\tdelete _testSession;\r\n\t\t\t_testSession = NULL;\r\n\t\t\t_sessionState = SS_NOT_CONNECTED;\r\n\t\t}\r\n\t\treturn ret;\r\n\t}\r\n\r\n\tprintf(\"Init TEST connection to server.\\n\");\r\n\t_testSession = new PPSession();\r\n\t_testSession->InitTestSession(this, config->ip.c_str(), config->port.c_str());\r\n\treturn 0;\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////////////////////////////////////\r\n// For current displaying video frame\r\n\r\nSessionState PPSessionManager::ConnectToServer(ServerConfig* config)\r\n{\r\n\tif (_session == nullptr)\r\n\t{\r\n\t\tprintf(\"Init connection to server.\\n\");\r\n\t\t_sessionState = SS_CONNECTING;\r\n\t\t_session = new PPSession();\r\n\t\t_session->InitSession(this, config->ip.c_str(), config->port.c_str());\r\n\t}\r\n\treturn _sessionState;\r\n}\r\n\r\nvoid PPSessionManager::DisconnectToServer()\r\n{\r\n\tif (_sessionState == SS_NOT_CONNECTED) return;\r\n\tif (_session == nullptr) return;\r\n\tprintf(\"Cleanup session.\\n\");\r\n\tdelete _session;\r\n\t_session = NULL;\r\n\t_sessionState = SS_NOT_CONNECTED;\r\n}\r\n\r\nvoid PPSessionManager::StartStreaming()\r\n{\r\n\tif (_sessionState != SS_PAIRED) return;\r\n\t//TODO: implement this\r\n}\r\n\r\nvoid 
PPSessionManager::StopStreaming()\r\n{\r\n\tif (_sessionState != SS_PAIRED) return;\r\n\t//TODO: implement this\r\n}\r\n\r\nvoid PPSessionManager::ProcessVideoFrame(u8* buffer, u32 size)\r\n{\r\n\tu8* rgbBuffer = _decoder->appendVideoBuffer(buffer, size);\r\n\tif (rgbBuffer != nullptr) PPGraphics::Get()->UpdateTopScreenSprite(rgbBuffer, 393216);\r\n\r\n\t//---------------------------------------------------------\r\n\t// update frame video FPS\r\n\t//---------------------------------------------------------\r\n\tif (!_receivedFirstVideoFrame)\r\n\t{\r\n\t\t_receivedFirstVideoFrame = true;\r\n\t\t_lastVideoTime = osGetTime();\r\n\t\t_currentVideoFPS = 0.0f;\r\n\t\t_videoFrame = 0;\r\n\t}\r\n\telse\r\n\t{\r\n\t\t_videoFrame++;\r\n\t\tu64 deltaTime = osGetTime() - _lastVideoTime;\r\n\t\tif (deltaTime > 1000)\r\n\t\t{\r\n\t\t\t_currentVideoFPS = _videoFrame / (deltaTime / 1000.0f);\r\n\t\t\t_videoFrame = 0;\r\n\t\t\t_lastVideoTime = osGetTime();\r\n\t\t}\r\n\t}\r\n}\r\n\r\nvoid PPSessionManager::DrawVideoFrame()\r\n{\r\n\tPPGraphics::Get()->DrawTopScreenSprite();\r\n}\r\n\r\nvoid PPSessionManager::ProcessAudioFrame(u8* buffer, u32 size)\r\n{\r\n\t_decoder->decodeAudioStream(buffer, size);\r\n}\r\n\r\nvoid PPSessionManager::UpdateStreamSetting()\r\n{\r\n\t//_session->SendMsgChangeSetting();\r\n}\r\n\r\nvoid PPSessionManager::GetControllerProfiles()\r\n{\r\n}\r\n\r\nvoid PPSessionManager::Authentication()\r\n{\r\n\tif (_sessionState != SS_CONNECTED || GetBusyState() != BS_NONE) return;\r\n\tSetBusyState(BS_AUTHENTICATION);\r\n\t_session->SendMsgAuthentication();\r\n}\r\n\r\nvoid PPSessionManager::InitDecoder()\r\n{\r\n\t_decoder = new PPDecoder();\r\n\t_decoder->initDecoder();\r\n}\r\n\r\nvoid PPSessionManager::ReleaseDecoder()\r\n{\r\n\t_decoder->releaseDecoder();\r\n}\r\n\r\n\r\nvoid PPSessionManager::UpdateInputStream(u32 down, u32 up, short cx, short cy, short ctx, short cty)\r\n{\r\n\tif (_session == nullptr) return;\r\n\tif(down != _oldDown || up != _oldUp || cx != 
_oldCX || cy != _oldCY || ctx != _oldCTX || cty != _oldCTY || !_initInputFirstFrame)\r\n\t{\r\n\t\tif(_session->SendMsgSendInputData(down, up, cx, cy, ctx, cty))\r\n\t\t{\r\n\t\t\t_initInputFirstFrame = true;\r\n\t\t\t_oldDown = down;\r\n\t\t\t_oldUp = up;\r\n\t\t\t_oldCX = cx;\r\n\t\t\t_oldCY = cy;\r\n\t\t\t_oldCTX = ctx;\r\n\t\t\t_oldCTY = cty;\r\n\t\t}\r\n\t}\r\n}\r\n\r\n\r\nvoid PPSessionManager::StartFPSCounter()\r\n{\r\n\t_lastRenderTime = osGetTime();\r\n\t_currentRenderFPS = 0.0f;\r\n\t_renderFrames = 0;\r\n}\r\n\r\nvoid PPSessionManager::UpdateFPSCounter()\r\n{\r\n\t_renderFrames++;\r\n\tu64 deltaTime = osGetTime() - _lastRenderTime;\r\n\tif(deltaTime > 1000)\r\n\t{\r\n\t\t_currentRenderFPS = _renderFrames / (deltaTime / 1000.0f);\r\n\t\t_renderFrames = 0;\r\n\t\t_lastRenderTime = osGetTime();\r\n\t}\r\n}\r\n"
  },
  {
    "path": "PinBox/PinBox/source/PPUI.cpp",
    "content": "#include \"PPUI.h\"\r\n#include <cstdio>\r\n#include \"ConfigManager.h\"\r\n#include \"Anim.h\"\r\n\r\nvolatile u32 kDown;\r\nvolatile u32 kHeld;\r\nvolatile u32 kUp;\r\n\r\nvolatile u32 last_kDown;\r\nvolatile u32 last_kHeld;\r\nvolatile u32 last_kUp;\r\n\r\nstatic circlePosition cPos;\r\nstatic circlePosition cStick;\r\nstatic touchPosition kTouch;\r\nstatic touchPosition last_kTouch;\r\nstatic touchPosition first_kTouchDown;\r\nstatic touchPosition last_kTouchDown;\r\nvolatile u64 holdTime = 0;\r\n\r\nstatic u32 sleepModeState = 0;\r\n//----------------------------------------\r\n// Dialog\r\n//---------------------------------------\r\nstatic bool mTmpLockTouch = false;\r\nstatic PopupCallback *mDialogBox;\r\nstatic PopupCallback *mDialogBoxCallLater;\r\nstatic DialogBoxOverride mDialogOverride;\r\n\r\n//----------------------------------------\r\n// Tab\r\n//---------------------------------------\r\nstatic int mPairedScreenTabIdx = 0;\r\nstatic int mHubItemSelectedIdx = -1;\r\n\r\n//----------------------------------------\r\n// Scroll Box\r\n//---------------------------------------\r\nstatic Vector2 mScrollLastTouch = Vector2{ -1,-1 };\r\nstatic float mScrollSpeedModified = 0;\r\nstatic bool mScrolling = false;\r\n\r\nstatic u64 mTmpWaitTimer = 0;\r\nstatic std::vector<PopupCallback> mPopupList;\r\nstatic std::string mTemplateInputString = \"\";\r\nstatic ServerConfig* mTmpServerConfig = nullptr;\r\n\r\n//----------------------------------------\r\n// animations\r\n//----------------------------------------\r\nstatic u64 mTmpLoadingTimeA = 0;\r\nstatic u64 mTmpLoadingTimeB = 0;\r\nstatic u64 mTmpLoadingTimeC = 0;\r\nstatic int mTmpLoadingDirA = 1;\r\nstatic int mTmpLoadingDirB = 1;\r\nstatic int mTmpLoadingDirC = 1;\r\n\r\n\r\nstatic const char* UI_INPUT_VALUE[] = { \"1\", \"2\", \"3\", \r\n\t\t\t\t\t\t\t\t\t\t\"4\", \"5\", \"6\", \r\n\t\t\t\t\t\t\t\t\t\t\"7\", \"8\", \"9\", \r\n\t\t\t\t\t\t\t\t\t\t\".\", \"0\", \":\" };\r\n\r\nstatic const 
char* UI_KEYBOARD_VALUE[] = { \r\n\t\"q\", \"w\", \"e\", \"r\", \"t\", \"y\", \"u\", \"i\", \"o\", \"p\",\r\n\t\"a\", \"s\", \"d\", \"f\", \"g\", \"h\", \"j\", \"k\", \"l\", \"'\",\r\n\t\"z\", \"x\", \"c\", \"v\", \"b\", \"n\", \"m\", \",\", \".\", \" \",\r\n};\r\n\r\nstatic const char* UI_TABS_PAIRED_SCREEN[] = {\r\n\t\"Hub\",\r\n\t\"Basic Settings\",\r\n\t\"Advanced Settings\",\r\n};\r\n\r\nu32 PPUI::getKeyDown()\r\n{\r\n\treturn kDown;\r\n}\r\n\r\nu32 PPUI::getKeyHold()\r\n{\r\n\treturn kHeld;\r\n}\r\n\r\nu32 PPUI::getKeyUp()\r\n{\r\n\treturn kUp;\r\n}\r\n\r\ncirclePosition PPUI::getLeftCircle()\r\n{\r\n\treturn cPos;\r\n}\r\n\r\ncirclePosition PPUI::getRightCircle()\r\n{\r\n\treturn cStick;\r\n}\r\n\r\nu32 PPUI::getSleepModeState()\r\n{\r\n\treturn sleepModeState;\r\n}\r\n\r\nvoid PPUI::UpdateInput()\r\n{\r\n\t//----------------------------------------\r\n\t// store old input\r\n\tlast_kDown = kDown;\r\n\tlast_kHeld = kHeld;\r\n\tlast_kUp = kUp;\r\n\tlast_kTouch = kTouch;\r\n\t//----------------------------------------\r\n\t// scan new input\r\n\tkDown = hidKeysDown();\r\n\tkHeld = hidKeysHeld();\r\n\tkUp = hidKeysUp();\r\n\tcPos = circlePosition();\r\n\thidCircleRead(&cPos);\r\n\tcStick = circlePosition();\r\n\tirrstCstickRead(&cStick);\r\n\tkTouch = touchPosition();\r\n\thidTouchRead(&kTouch);\r\n\r\n\tif(kDown & KEY_TOUCH)\r\n\t{\r\n\t\tfirst_kTouchDown = kTouch;\r\n\t\tlast_kTouchDown = touchPosition();\r\n\t}\r\n\tif(last_kHeld & KEY_TOUCH && kUp & KEY_TOUCH)\r\n\t{\r\n\t\tlast_kTouchDown = kTouch;\r\n\t\tfirst_kTouchDown = touchPosition();\r\n\t\tholdTime = 0;\r\n\t}\r\n}\r\n\r\nbool PPUI::TouchDownOnArea(float x, float y, float w, float h)\r\n{\r\n\tif (kDown & KEY_TOUCH || kHeld & KEY_TOUCH)\r\n\t{\r\n\t\tif (kTouch.px >= (u16)x && kTouch.px <= (u16)(x + w) && kTouch.py >= (u16)y && kTouch.py <= (u16)(y + h))\r\n\t\t{\r\n\t\t\treturn true;\r\n\t\t}\r\n\t}\r\n\treturn false;\r\n}\r\n\r\nbool PPUI::TouchUpOnArea(float x, float y, float w, float 
h)\r\n{\r\n\tif ((last_kDown & KEY_TOUCH || last_kHeld & KEY_TOUCH) && kUp & KEY_TOUCH)\r\n\t{\r\n\t\tif (last_kTouch.px >= (u16)x && last_kTouch.px <= (u16)(x + w) && last_kTouch.py >= (u16)y && last_kTouch.py <= (u16)(y + h))\r\n\t\t{\r\n\t\t\treturn true;\r\n\t\t}\r\n\t}\r\n\treturn false;\r\n}\r\n\r\nbool PPUI::TouchDown()\r\n{\r\n\treturn kDown & KEY_TOUCH;\r\n}\r\n\r\nbool PPUI::TouchMove()\r\n{\r\n\treturn kHeld & KEY_TOUCH;\r\n}\r\n\r\nbool PPUI::TouchUp()\r\n{\r\n\treturn last_kHeld & KEY_TOUCH && kUp & KEY_TOUCH;\r\n}\r\n\r\n\r\n\r\nvoid PPUI::InitResource()\r\n{\r\n\t// create tmp folder\r\n\tstruct stat st = { 0 };\r\n\tif (stat(\"pinbox\", &st) == -1) mkdir(\"pinbox\", 0700);\t\t\t\t// create root pinbox folder\r\n\tif (stat(\"pinbox/tmp\", &st) == -1) mkdir(\"pinbox/tmp\", 0700);\t\t// create tmp folder for cached assets\r\n\r\n\t// add static resources\r\n\tPPGraphics::Get()->AddCacheImageAsset(\"monitor.png\", \"monitor\");\r\n\tPPGraphics::Get()->AddCacheImageAsset(\"default_1.png\", \"default_1\");\r\n\tPPGraphics::Get()->AddCacheImageAsset(\"default_2.png\", \"default_2\");\r\n\tPPGraphics::Get()->AddCacheImageAsset(\"default_3.png\", \"default_3\");\r\n\tPPGraphics::Get()->AddCacheImageAsset(\"default_4.png\", \"default_4\");\r\n}\r\n\r\nvoid PPUI::CleanupResource()\r\n{\r\n\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////\r\n// TEXT\r\n///////////////////////////////////////////////////////////////////////////\r\n\r\nint PPUI::DrawIdleTopScreen(PPSessionManager* sessionManager)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 400, 240, RGB(26, 188, 156));\r\n\tLabelBox(0, 0, 400, 240, \"PinBox\", RGB(26, 188, 156), RGB(255, 255, 255));\r\n\treturn 0; // the function is declared int, so it must return a value\r\n}\r\n\r\nstatic Vector2 scrollboxTestCursor;\r\n\r\nint PPUI::DrawBtmServerSelectScreen(PPSessionManager* sm)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, RGB(236, 240, 241));\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 35, RGB(26, 188, 
156));\r\n\r\n\t// Screen title\r\n\tswitch (sm->GetSessionState()) {\r\n\tcase -1: LabelBox(55, 5, 200, 25, \"Status: No Wi-Fi Connection\", RGB(26, 188, 156), RGB(255, 255, 255)); break;\r\n\tcase 0: LabelBox(55, 5, 200, 25, \"Status: Ready to Connect\", RGB(26, 188, 156), RGB(255, 255, 255)); break;\r\n\tcase 1: LabelBox(55, 5, 200, 25, \"Status: Connecting...\", RGB(26, 188, 156), RGB(255, 255, 255)); break;\r\n\tcase 2: LabelBox(55, 5, 200, 25, \"Status: Connected\", RGB(26, 188, 156), RGB(255, 255, 255)); break;\r\n\t}\r\n\r\n\t// Quit\r\n\tif (FlatColorButton(5, 5, 50, 25, \"Quit\", RGB(214, 48, 49), RGB(255, 118, 117), RGB(255, 255, 255), 6.f))\r\n\t{\r\n\t\tOverrideDialogTypeCritical();\r\n\t\tDrawDialogMessage(sm, \"Warning\", \"Are you sure you want to quit?\", [=]()\r\n\t\t{\r\n\t\t\t// on cancel\r\n\t\t\treturn -1;\r\n\t\t}, [=]()\r\n\t\t{\r\n\t\t\t// on ok\r\n\t\t\treturn RET_CLOSE_APP;\r\n\t\t});\r\n\t}\r\n\r\n\t// Add new Server\r\n\tif (FlatColorButton(260, 5, 50, 25, \"Add\", RGB(9, 132, 227), RGB(116, 185, 255), RGB(255, 255, 255), 6.f))\r\n\t{\r\n\t\tAddPopup([=]()\r\n\t\t{\r\n\t\t\tif(mTmpServerConfig == nullptr)\r\n\t\t\t{\r\n\t\t\t\tmTmpServerConfig = new ServerConfig();\r\n\t\t\t\tmTmpServerConfig->name = \"My Local PC\";\r\n\t\t\t\tmTmpServerConfig->ip = \"127.0.0.1\";\r\n\t\t\t\tmTmpServerConfig->port = \"1234\";\r\n\t\t\t}\r\n\t\t\t\r\n\t\t\treturn DrawBtmAddNewServerProfileScreen(sm,\r\n\t\t\t\t[=](void* a, void* b)\r\n\t\t\t\t{\r\n\t\t\t\t\tdelete mTmpServerConfig;\r\n\t\t\t\t\tmTmpServerConfig = nullptr;\r\n\t\t\t\t\t// on cancel\r\n\t\t\t\t},\r\n\t\t\t\t[=](void* a, void* b)\r\n\t\t\t\t{\r\n\t\t\t\t\t// add to list\r\n\t\t\t\t\tConfigManager::Get()->servers.push_back(ServerConfig(*mTmpServerConfig));\r\n\t\t\t\t\tConfigManager::Get()->Save(); //TODO: error here when saving\r\n\r\n\t\t\t\t\tdelete mTmpServerConfig;\r\n\t\t\t\t\tmTmpServerConfig = nullptr;\r\n\t\t\t\t\t// on ok\r\n\t\t\t\t}\r\n\t\t\t);\r\n\t\t});\r\n\t}\r\n\r\n\t// List 
servers\r\n\tif(ConfigManager::Get()->servers.size() > 0)\r\n\t{\r\n\t\t//TODO: scroll box\r\n\t\tfloat boxHeight = 40;\r\n\t\tfor(int i = 0; i < ConfigManager::Get()->servers.size(); ++i)\r\n\t\t{\r\n\t\t\tfloat sx = 40 + boxHeight * i + 5 * i;\r\n\t\t\t// Draw BG\r\n\t\t\tPPGraphics::Get()->DrawRectangle(0, sx, 320, boxHeight, RGB(223, 228, 234));\r\n\r\n\t\t\t// Server name\r\n\t\t\tLabelBoxLeft(70, sx + 2, 150, 20, ConfigManager::Get()->servers[i].name.c_str(), TRANSPARENT, RGB(47, 53, 66), 0.7);\r\n\t\t\tLabelBoxLeft(70, sx + 25, 150, 10, (ConfigManager::Get()->servers[i].ip + \":\" + ConfigManager::Get()->servers[i].port).c_str(), TRANSPARENT, RGB(116, 125, 140), 0.45);\r\n\r\n\t\t\t// Connect button\r\n\t\t\tif (FlatColorButton(5, sx + 2, 60, 32, \"Connect\", RGB(46, 213, 115), RGB(123, 237, 159), RGB(47, 53, 66), 6.f)\r\n\t\t\t\t&& sm->GetSessionState() == SS_NOT_CONNECTED)\r\n\t\t\t{\r\n\t\t\t\tmTmpWaitTimer = osGetTime();\r\n\t\t\t\tOverrideDialogTypeInfo();\r\n\t\t\t\tDrawDialogLoading(\"Connecting\", \"Make sure the server is running on your PC.\\nPlease be patient while the app connects.\", [=]()\r\n\t\t\t\t{\r\n\t\t\t\t\tSessionState state = sm->ConnectToServer(&ConfigManager::Get()->servers[i]);\r\n\t\t\t\t\tswitch (state)\r\n\t\t\t\t\t{\r\n\t\t\t\t\tcase SS_CONNECTED:\r\n\t\t\t\t\t\tif (osGetTime() - mTmpWaitTimer > (1 * TIME_SECOND)) {\r\n\t\t\t\t\t\t\tOverrideDialogTypeSuccess();\r\n\t\t\t\t\t\t\tOverrideDialogContent(\"Authentication\", \"Please wait while the client\\nhandshakes and is verified.\");\r\n\t\t\t\t\t\t\t// send authentication message\r\n\t\t\t\t\t\t\tif (osGetTime() - mTmpWaitTimer > (2 * TIME_SECOND)) {\r\n\t\t\t\t\t\t\t\tsm->Authentication();\r\n\t\t\t\t\t\t\t}\r\n\t\t\t\t\t\t}\r\n\t\t\t\t\t\treturn 0;\r\n\t\t\t\t\tcase SS_PAIRED:\r\n\t\t\t\t\t\t// reset the paired screen state\r\n\t\t\t\t\t\tmPairedScreenTabIdx = 0;\r\n\t\t\t\t\t\treturn -1;\r\n\t\t\t\t\tcase SS_FAILED:\r\n\t\t\t\t\t\tif (osGetTime() - mTmpWaitTimer > (1 * 
TIME_SECOND))\r\n\t\t\t\t\t\t{\r\n\t\t\t\t\t\t\tmDialogBoxCallLater = new PopupCallback([=]()\r\n\t\t\t\t\t\t\t{\r\n\t\t\t\t\t\t\t\t// We need to set the session manager back to the not-connected state;\r\n\t\t\t\t\t\t\t\t// no other state can reach here, so it must be SS_FAILED\r\n\r\n\t\t\t\t\t\t\t\tOverrideDialogTypeCritical();\r\n\t\t\t\t\t\t\t\tDrawDialogMessage(sm, \"Error\", \"Can't connect to the server.\", [=]()\r\n\t\t\t\t\t\t\t\t{\r\n\t\t\t\t\t\t\t\t\tsm->DisconnectToServer();\r\n\t\t\t\t\t\t\t\t\treturn -1;\r\n\t\t\t\t\t\t\t\t});\r\n\t\t\t\t\t\t\t\treturn 0;\r\n\t\t\t\t\t\t\t});\r\n\t\t\t\t\t\t\treturn -1;\r\n\t\t\t\t\t\t}\r\n\t\t\t\t\t\treturn 0;\r\n\t\t\t\t\tdefault:\r\n\t\t\t\t\t\treturn 0;\r\n\t\t\t\t\t}\r\n\t\t\t\t\treturn 0;\r\n\t\t\t\t});\r\n\t\t\t}\r\n\r\n\t\t\t// Remove button\r\n\t\t\tif (FlatColorButton(286, sx + 6, 26, 26, \"X\", RGB(255, 71, 87), RGB(255, 107, 129), RGB(255, 255, 255), 13.f))\r\n\t\t\t{\r\n\t\t\t\tDrawDialogMessage(sm, \"Warning\", \"Are you sure you want to remove this profile?\", [=]()\r\n\t\t\t\t{\r\n\t\t\t\t\t// on cancel\r\n\t\t\t\t\treturn -1;\r\n\t\t\t\t}, [=]()\r\n\t\t\t\t{\r\n\t\t\t\t\t// on ok\r\n\t\t\t\t\tConfigManager::Get()->servers.erase(ConfigManager::Get()->servers.begin() + i);\r\n\t\t\t\t\tConfigManager::Get()->Save();\r\n\t\t\t\t\treturn -1;\r\n\t\t\t\t});\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t} else\r\n\t{\r\n\t\t// Draw empty screen and tutorial\r\n\t\tLabelBoxAutoWrap(10, 45, 300, 185, \"No server profile found.\nPlease add a new one with the Add button.\", TRANSPARENT, PPGraphics::Get()->PrimaryTextColor);\r\n\t}\r\n\r\n\t// test scroll box\r\n\tscrollboxTestCursor = ScrollBox(10, 100, 300, 100, D_HORIZONTAL, scrollboxTestCursor, [=]()\r\n\t{\r\n\t\tPPGraphics::Get()->DrawRectangle(10, 100, 300, 100, RGB(240, 147, 43));\r\n\t\tWH wh{ 1500, 300 };\r\n\t\treturn wh;\r\n\t});\r\n\r\n\t// Dialog box ( always at the bottom so it draws on top )\r\n\treturn DrawDialogBox(sm);\r\n}\r\n\r\nint 
PPUI::DrawBtmAddNewServerProfileScreen(PPSessionManager* sessionManager, ResultCallback cancel, ResultCallback ok)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, RGB(236, 240, 241));\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 25, RGB(26, 188, 156));\r\n\r\n\t// Screen title\r\n\tLabelBox(0, 5, 320, 20, \"Add New Server\", RGB(26, 188, 156), RGB(255, 255, 255));\r\n\r\n\t// Input server name\r\n\tLabelBoxLeft(5, 30, 100, 30, \"Server Name\", TRANSPARENT, RGB(44, 62, 80));\r\n\tif (LabelBox(105, 30, 210, 30, mTmpServerConfig->name.c_str(), PPGraphics::Get()->PrimaryColor, RGB(255, 255, 255), 0.5f, 6.f))\r\n\t{\r\n\t\tmTemplateInputString = std::string(mTmpServerConfig->name);\r\n\t\tDrawDialogKeyboard([=](void* a, void* b) {},\r\n\t\t[=](void* a, void* b)\r\n\t\t{\r\n\t\t\t// ok\r\n\t\t\tmTmpServerConfig->name.clear();\r\n\t\t\tmTmpServerConfig->name.append(mTemplateInputString);\r\n\t\t\tmTemplateInputString = \"\";\r\n\t\t});\r\n\t}\r\n\r\n\t// Input server ip\r\n\tLabelBoxLeft(5, 70, 100, 30, \"Server IP\", TRANSPARENT, RGB(44, 62, 80));\r\n\tif (LabelBox(105, 70, 210, 30, mTmpServerConfig->ip.c_str(), PPGraphics::Get()->PrimaryColor, RGB(255, 255, 255), 0.5f, 6.f))\r\n\t{\r\n\t\tmTemplateInputString = std::string(mTmpServerConfig->ip);\r\n\t\tDrawDialogNumberInput([=](void* a, void* b) {},\r\n\t\t\t[=](void* a, void* b)\r\n\t\t{\r\n\t\t\t// ok\r\n\t\t\tmTmpServerConfig->ip.clear();\r\n\t\t\tmTmpServerConfig->ip.append(mTemplateInputString);\r\n\t\t\tmTemplateInputString = \"\";\r\n\t\t});\r\n\t}\r\n\r\n\r\n\t// Input server port\r\n\tLabelBoxLeft(5, 110, 100, 30, \"Server Port\", TRANSPARENT, RGB(44, 62, 80));\r\n\tif (LabelBox(105, 110, 210, 30, mTmpServerConfig->port.c_str(), PPGraphics::Get()->PrimaryColor, RGB(255, 255, 255), 0.5f, 6.f))\r\n\t{\r\n\t\tmTemplateInputString = std::string(mTmpServerConfig->port);\r\n\t\tDrawDialogNumberInput([=](void* a, void* b) {},\r\n\t\t\t[=](void* a, void* b)\r\n\t\t{\r\n\t\t\t// 
ok\r\n\t\t\tmTmpServerConfig->port.clear();\r\n\t\t\tmTmpServerConfig->port.append(mTemplateInputString);\r\n\t\t\tmTemplateInputString = \"\";\r\n\t\t});\r\n\t}\r\n\r\n\r\n\t// Cancel button\r\n\tif (FlatColorButton(10, 200, 50, 30, \"Cancel\", RGB(192, 57, 43), RGB(231, 76, 60), RGB(223, 228, 234), 6.f))\r\n\t{\r\n\t\tClosePopup();\r\n\t\tif (cancel != nullptr) cancel(nullptr, nullptr);\r\n\t}\r\n\r\n\r\n\t// OK button\r\n\tif (FlatColorButton(260, 200, 50, 30, \"OK\", RGB(41, 128, 185), RGB(52, 152, 219), RGB(223, 228, 234), 6.f))\r\n\t{\r\n\t\tmTmpWaitTimer = osGetTime();\r\n\t\tOverrideDialogTypeInfo();\r\n\t\tDrawDialogLoading(\"Testing connection\", \"Make sure the server is running on your PC.\nPlease be patient while testing.\", [=]()\r\n\t\t{\r\n\t\t\tif (osGetTime() - mTmpWaitTimer > (2 * TIME_SECOND))\r\n\t\t\t{\r\n\t\t\t\t// return -1 to finish loading\r\n\t\t\t\tint ret = sessionManager->TestConnection(mTmpServerConfig);\r\n\t\t\t\tif (ret == 1)\r\n\t\t\t\t{\r\n\t\t\t\t\tmDialogBoxCallLater = new PopupCallback([=]()\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\tOverrideDialogTypeSuccess();\r\n\t\t\t\t\t\tDrawDialogMessage(sessionManager, \"Good Work\", \"Connection to the server seems good!\");\r\n\r\n\t\t\t\t\t\tClosePopup();\r\n\t\t\t\t\t\tif (ok != nullptr) ok(nullptr, nullptr);\r\n\r\n\t\t\t\t\t\treturn 0;\r\n\t\t\t\t\t});\r\n\t\t\t\t\treturn -1;\r\n\t\t\t\t}\r\n\t\t\t\telse if (ret == -1)\r\n\t\t\t\t{\r\n\t\t\t\t\tmDialogBoxCallLater = new PopupCallback([=]()\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\tOverrideDialogTypeCritical();\r\n\t\t\t\t\t\tDrawDialogMessage(sessionManager, \"Error\", \"Can't connect to the server!\");\r\n\r\n\t\t\t\t\t\treturn 0;\r\n\t\t\t\t\t});\r\n\t\t\t\t\treturn -1;\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t\treturn 0;\r\n\t\t});\r\n\t}\r\n\r\n\t// Dialog box ( always at the bottom so it draws on top )\r\n\treturn DrawDialogBox(sessionManager);\r\n}\r\n\r\nint PPUI::DrawBtmPairedScreen(PPSessionManager* sm)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, RGB(236, 
240, 241));\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 35, RGB(178, 190, 195));\r\n\r\n\t//TODO: poll the WiFi status in a loop;\r\n\t// if WiFi is turned off we need to close the connection and return to the Select Server screen\r\n\r\n\t// Disconnect from server\r\n\tif (FlatColorButton(5, 5, 80, 25, \"Disconnect\", RGB(214, 48, 49), RGB(255, 118, 117), RGB(255, 255, 255), 6.f))\r\n\t{\r\n\t\tOverrideDialogTypeCritical();\r\n\t\tDrawDialogMessage(sm, \"Warning\", \"Are you sure you want to disconnect?\", [=]()\r\n\t\t{\r\n\t\t\t// on cancel\r\n\t\t\treturn -1;\r\n\t\t}, [=]()\r\n\t\t{\r\n\t\t\t// on ok\r\n\t\t\tsm->DisconnectToServer();\r\n\t\t\treturn -1;\r\n\t\t});\r\n\t}\r\n\r\n\t// LOGO\r\n\tLabelBox(130, 0, 60, 35, \"PinBox\", TRANSPARENT, RGB(47, 53, 66), 0.9f);\r\n\r\n\t// Stream / Stop button\r\n\t// only displayed when an item is selected and the hub tab is active\r\n\tif (mHubItemSelectedIdx >= 0 && mPairedScreenTabIdx == 0) {\r\n\t\tif (FlatColorButton(240, 5, 70, 25, \"Stream\", RGB(46, 213, 115), RGB(123, 237, 159), RGB(47, 53, 66), 6.f))\r\n\t\t{\r\n\t\t\t\r\n\t\t}\r\n\t}\r\n\r\n\t// Case selection\r\n\tif (sm->GetSessionState() == SS_PAIRED)\r\n\t{\r\n\t\t// Draw config tabs\r\n\r\n\t\t// Tab 1: Stream mode / Stream quality ( basic setting )\r\n\t\tint ret = DrawTabs(UI_TABS_PAIRED_SCREEN, 3, mPairedScreenTabIdx, 0, 40, 320, 200);\r\n\t\tswitch (ret)\r\n\t\t{\r\n\t\tcase 0:\r\n\t\tcase 1:\r\n\t\t\tmPairedScreenTabIdx = ret;\r\n\t\t\tbreak;\r\n\t\tcase 2:\r\n\t\t\tOverrideDialogTypeCritical();\r\n\t\t\tDrawDialogMessage(sm, \"Warning\", \"Modifying settings here may cause\ncrashes or unexpected behaviour.\nMake sure you know what you are doing.\", [=]()\r\n\t\t\t{\r\n\t\t\t\t// on cancel\r\n\t\t\t\treturn -1;\r\n\t\t\t}, [=]()\r\n\t\t\t{\r\n\t\t\t\t// on ok\r\n\t\t\t\tmPairedScreenTabIdx = ret;\r\n\t\t\t\treturn -1;\r\n\t\t\t});\r\n\t\t\tbreak;\r\n\t\t}\r\n\r\n\t\tswitch (mPairedScreenTabIdx)\r\n\t\t{\r\n\t\tcase 0: 
{\r\n\t\t\t//==========================================================\r\n\t\t\t// HUB ITEMS SCREEN\r\n\t\t\t//==========================================================\r\n\t\t\tint hubCount = sm->GetHubItemCount();\r\n\t\t\tint posY = 0;\r\n\t\t\tint posX = 0;\r\n\t\t\tfor (int i = 0; i < hubCount; ++i)\r\n\t\t\t{\r\n\t\t\t\tHubItem* hItem = sm->GetHubItem(i);\r\n\r\n\t\t\t\t// get sprite\r\n\t\t\t\tSprite* hSprite = nullptr;\r\n\t\t\t\tif (hItem->type == HUB_SCREEN)\r\n\t\t\t\t{\r\n\t\t\t\t\thSprite = PPGraphics::Get()->GetCacheImage(\"monitor\");\r\n\t\t\t\t}\r\n\t\t\t\telse\r\n\t\t\t\t{\r\n\t\t\t\t\thSprite = PPGraphics::Get()->GetCacheImage(hItem->uuid.c_str());\r\n\t\t\t\t}\r\n\r\n\t\t\t\t// display config\r\n\t\t\t\tint itemSpaceX = 48;\r\n\t\t\t\tint itemSpaceY = 32;\r\n\t\t\t\tint thumbSize = 48;\r\n\t\t\t\tint cx = 0, cy = 75;\r\n\t\t\t\tint ix = cx + 25 + posX;\r\n\t\t\t\tint iy = cy + 5 + (thumbSize + itemSpaceY) * posY;\r\n\r\n\t\t\t\t// draw background\r\n\t\t\t\tif(SelectBox(ix - 15, iy - 5, thumbSize + 30, thumbSize + 25, mHubItemSelectedIdx == i ? RGB(30, 144, 255) : RGB(241, 242, 246), 6.0f))\r\n\t\t\t\t{\r\n\t\t\t\t\tif (mHubItemSelectedIdx != i) mHubItemSelectedIdx = i;\r\n\t\t\t\t\telse mHubItemSelectedIdx = -1;\r\n\t\t\t\t}\r\n\r\n\t\t\t\t// draw image\r\n\t\t\t\tPPGraphics::Get()->DrawImage(hSprite, ix, iy, thumbSize, thumbSize);\r\n\t\t\t\tLabelBox(ix, iy + thumbSize, thumbSize, 25, hItem->name.c_str(), TRANSPARENT, mHubItemSelectedIdx == i ? 
RGB(241, 242, 246) : RGB(47, 53, 66));\r\n\r\n\t\t\t\t//--- move along\r\n\t\t\t\tposY++;\r\n\t\t\t\tif (posY >= 2) {\r\n\t\t\t\t\tposY = 0;\r\n\t\t\t\t\tposX += thumbSize + itemSpaceX;\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tcase 1: {\r\n\t\t\t//==========================================================\r\n\t\t\t// BASIC CONFIG SCREEN\r\n\t\t\t//==========================================================\r\n\t\t\tHubItem* hItem = sm->GetHubItem(2);\r\n\t\t\tSprite* hSprite = PPGraphics::Get()->GetCacheImage(hItem->uuid.c_str());\r\n\t\t\tPPGraphics::Get()->DrawImage(hSprite, 100, 100, 48, 48);\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tcase 2: {\r\n\t\t\t//==========================================================\r\n\t\t\t// ADVANCED CONFIG SCREEN\r\n\t\t\t//==========================================================\r\n\t\t\tHubItem* hItem = sm->GetHubItem(3);\r\n\t\t\tSprite* hSprite = PPGraphics::Get()->GetCacheImage(hItem->uuid.c_str());\r\n\t\t\tPPGraphics::Get()->DrawImage(hSprite, 100, 100, 48, 48);\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\t}\r\n\r\n\t} else if (sm->GetSessionState() == SS_STREAMING) {\r\n\r\n\t}\r\n\r\n\t// Dialog box ( always at the bottom so it draws on top )\r\n\treturn DrawDialogBox(sm);\r\n}\r\n\r\n\r\n\r\nvoid PPUI::OverrideDialogTypeWarning()\r\n{\r\n\tmDialogOverride.isActivate = true;\r\n\tmDialogOverride.TitleBgColor = RGB(255, 127, 80);\r\n\tmDialogOverride.TitleTextColor = RGB(47, 53, 66);\r\n}\r\n\r\nvoid PPUI::OverrideDialogTypeInfo()\r\n{\r\n\tmDialogOverride.isActivate = true;\r\n\tmDialogOverride.TitleBgColor = RGB(112, 161, 255);\r\n\tmDialogOverride.TitleTextColor = RGB(47, 53, 66);\r\n}\r\n\r\nvoid PPUI::OverrideDialogTypeSuccess()\r\n{\r\n\tmDialogOverride.isActivate = true;\r\n\tmDialogOverride.TitleBgColor = RGB(123, 237, 159);\r\n\tmDialogOverride.TitleTextColor = RGB(47, 53, 66);\r\n}\r\n\r\nvoid PPUI::OverrideDialogTypeCritical()\r\n{\r\n\tmDialogOverride.isActivate = true;\r\n\tmDialogOverride.TitleBgColor = RGB(255, 71, 
87);\r\n\tmDialogOverride.TitleTextColor = RGB(47, 53, 66);\r\n}\r\n\r\nvoid PPUI::OverrideDialogContent(const char* title, const char* body)\r\n{\r\n\tmDialogOverride.isActivate = true;\r\n\tmDialogOverride.Title = title;\r\n\tmDialogOverride.Body = body;\r\n}\r\n\r\nint PPUI::DrawDialogKeyboard(ResultCallback cancelCallback, ResultCallback okCallback)\r\n{\r\n\tif (mDialogBox) return 0;\r\n\tmDialogBox = new PopupCallback([=]()\r\n\t{\r\n\t\t// draw background\r\n\t\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, PPGraphics::Get()->TransBackgroundDark);\r\n\r\n\t\t// draw dialog box\r\n\t\tfloat boxY = 30;\r\n\t\tfloat boxHeight = 180;\r\n\t\tPPGraphics::Get()->DrawRectangle(13, boxY - 2, 294, boxHeight + 4, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor);\r\n\t\tPPGraphics::Get()->DrawRectangle(15, boxY, 290, boxHeight, RGB(247, 247, 247));\r\n\r\n\t\t// Input display\r\n\t\tLabelBox(22, boxY + 15, 276, 30, mTemplateInputString.c_str(), mDialogOverride.isActivate ? 
mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, RGB(247, 247, 247));\r\n\r\n\t\t// Keyboard Number\r\n\t\tint kW = 28;\r\n\t\tfloat startX = 22, startY = 60 + boxY;\r\n\t\tfor (int c = 0; c < 10; c++)\r\n\t\t{\r\n\t\t\tfor (int r = 0; r < 3; r++)\r\n\t\t\t{\r\n\t\t\t\tif (FlatButton(startX + c * kW, startY + r * kW, kW - 4, kW - 4, UI_KEYBOARD_VALUE[c + r * 10]))\r\n\t\t\t\t{\r\n\t\t\t\t\tchar v = *UI_KEYBOARD_VALUE[c + r * 10];\r\n\t\t\t\t\tmTemplateInputString.push_back(v);\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// Cancel button\r\n\t\tif (FlatColorButton(22, boxY + boxHeight - 30, 56, 25, \"Cancel\",\r\n\t\t\tPPGraphics::Get()->AccentColor, PPGraphics::Get()->AccentDarkColor, PPGraphics::Get()->AccentTextColor))\r\n\t\t{\r\n\t\t\tmTemplateInputString = \"\";\r\n\t\t\tcancelCallback(nullptr, nullptr);\r\n\t\t\treturn -1;\r\n\t\t}\r\n\r\n\t\t// Delete Button\r\n\t\tif (FlatColorButton(192, boxY + boxHeight - 30, 56, 25, \"Delete\",\r\n\t\t\tPPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryDarkColor, RGB(247, 247, 247)))\r\n\t\t{\r\n\t\t\tif (mTemplateInputString.size() > 0)\r\n\t\t\t{\r\n\t\t\t\tmTemplateInputString.erase(mTemplateInputString.end() - 1);\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// OK Button\r\n\t\tif (FlatColorButton(258, boxY + boxHeight - 30, 40, 25, \"OK\",\r\n\t\t\tPPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryDarkColor, RGB(247, 247, 247)))\r\n\t\t{\r\n\t\t\tokCallback(nullptr, nullptr);\r\n\t\t\treturn -1;\r\n\t\t}\r\n\t\t\r\n\t\treturn 0;\r\n\t});\r\n\treturn 0;\r\n}\r\n\r\nint PPUI::DrawDialogNumberInput(ResultCallback cancelCallback, ResultCallback okCallback)\r\n{\r\n\tif (mDialogBox) return 0;\r\n\tmDialogBox = new PopupCallback([=]()\r\n\t{\r\n\t\t// draw background\r\n\t\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, PPGraphics::Get()->TransBackgroundDark);\r\n\r\n\t\t// draw dialog box\r\n\t\tfloat boxY = 30;\r\n\t\tfloat boxHeight = 180;\r\n\t\tPPGraphics::Get()->DrawRectangle(13, boxY - 2, 294, 
boxHeight + 4, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor);\r\n\t\tPPGraphics::Get()->DrawRectangle(15, boxY, 290, boxHeight, RGB(247, 247, 247));\r\n\r\n\t\t// Input display\r\n\t\tLabelBox(22, boxY + 15, 276, 30, mTemplateInputString.c_str(), mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, RGB(247, 247, 247));\r\n\r\n\t\t// Keyboard Number\r\n\t\tint kW = 28;\r\n\t\tfloat startX = 22, startY = 60 + boxY;\r\n\t\tfor (int c = 0; c < 3; c++)\r\n\t\t{\r\n\t\t\tfor (int r = 0; r < 4; r++)\r\n\t\t\t{\r\n\t\t\t\tif (FlatButton(startX + c * kW, startY + r * kW, kW - 4, kW - 4, UI_INPUT_VALUE[c + r * 3]))\r\n\t\t\t\t{\r\n\t\t\t\t\tchar v = *UI_INPUT_VALUE[c + r * 3];\r\n\t\t\t\t\tmTemplateInputString.push_back(v);\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// Cancel button\r\n\t\tif (FlatColorButton(238, boxY + boxHeight - 30, 56, 25, \"Cancel\",\r\n\t\t\tPPGraphics::Get()->AccentColor, PPGraphics::Get()->AccentDarkColor, PPGraphics::Get()->AccentTextColor))\r\n\t\t{\r\n\t\t\tmTemplateInputString = \"\";\r\n\t\t\tcancelCallback(nullptr, nullptr);\r\n\t\t\treturn -1;\r\n\t\t}\r\n\r\n\t\t// Delete Button\r\n\t\tif (FlatColorButton(192, startY, 56, 25, \"Delete\",\r\n\t\t\tPPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryDarkColor, RGB(247, 247, 247)))\r\n\t\t{\r\n\t\t\tif (mTemplateInputString.size() > 0)\r\n\t\t\t{\r\n\t\t\t\tmTemplateInputString.erase(mTemplateInputString.end() - 1);\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// OK Button\r\n\t\tif (FlatColorButton(258, startY, 40, 25, \"OK\",\r\n\t\t\tPPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryDarkColor, RGB(247, 247, 247)))\r\n\t\t{\r\n\t\t\tokCallback(nullptr, nullptr);\r\n\t\t\treturn -1;\r\n\t\t}\r\n\r\n\t\treturn 0;\r\n\t});\r\n\treturn 0;\r\n}\r\n\r\n\r\nint PPUI::DrawDialogLoading(const char* title, const char* body, PopupCallback callback)\r\n{\r\n\tif (mDialogBox) return 0;\r\n\tmTmpLoadingDirA = 
1;\r\n\tmTmpLoadingDirB = 1;\r\n\tmTmpLoadingTimeA = osGetTime();\r\n\tmTmpLoadingTimeB = osGetTime();\r\n\tmDialogBox = new PopupCallback([=]()\r\n\t{\r\n\t\tVector3 bodySize = PPGraphics::Get()->GetTextSizeAutoWrap(body, 0.5, 0.5, 280);\r\n\t\tfloat popupHeight = bodySize.y + 40 + 36;\r\n\t\tif (popupHeight > 220) popupHeight = 220;\r\n\r\n\t\tfloat spaceY = (240.0f - popupHeight) / 2.0f;\r\n\r\n\t\t// draw background\r\n\t\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, PPGraphics::Get()->TransBackgroundDark);\r\n\r\n\t\t// draw dialog box\r\n\t\tPPGraphics::Get()->DrawRectangle(13, spaceY, 294, popupHeight + 4, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, 6.0f);\r\n\t\tPPGraphics::Get()->DrawRectangle(15, spaceY + 2, 290, popupHeight, RGB(247, 247, 247), 6.0f);\r\n\r\n\t\t// draw title\r\n\t\tLabelBox(15, spaceY + 2, 290, 30, mDialogOverride.isActivate && mDialogOverride.Title != nullptr ? mDialogOverride.Title : title, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryTextColor, 0.8);\r\n\t\tLabelBoxAutoWrap(20, spaceY + 40, 280, bodySize.y, mDialogOverride.isActivate && mDialogOverride.Body ? 
mDialogOverride.Body : body, RGB(255, 255, 255), PPGraphics::Get()->PrimaryTextColor);\r\n\r\n\r\n\t\t// animation part\r\n\t\tfloat minX = 60, maxX = 250;\r\n\t\tfloat minW = 15, maxW = 55;\r\n\t\tdouble duration = 1.0 * 1000ULL;\r\n\t\tdouble durationW = duration / 2.0;\r\n\r\n\t\tu64 timePassA = osGetTime() - mTmpLoadingTimeA;\r\n\t\tu64 timePassB = osGetTime() - mTmpLoadingTimeB;\r\n\r\n\t\tdouble p = (double)timePassA / duration;\r\n\t\tif (mTmpLoadingDirA == -1) p = 1.0 - p;\r\n\t\tdouble e = getEasingFunction(EaseInOutExpo)(p);\r\n\t\tfloat currentX = minX + e * (maxX - minX);\r\n\r\n\t\tdouble pW = (double)timePassB / durationW;\r\n\t\tif (mTmpLoadingDirB == -1) pW = 1.0 - pW;\r\n\t\tdouble eW = getEasingFunction(EaseInQuint)(pW);\r\n\t\tfloat currentW = minW + eW * (maxW - minW);\r\n\r\n\t\tPPGraphics::Get()->DrawRectangle(currentX, spaceY + 40 + bodySize.y + 4, currentW, 15, RGB(255, 165, 2), 7.5f);\r\n\r\n\t\tif (timePassA >= duration)\r\n\t\t{\r\n\t\t\tmTmpLoadingTimeA = osGetTime();\r\n\t\t\tmTmpLoadingDirA *= -1;\r\n\t\t}\r\n\r\n\t\tif (timePassB >= durationW)\r\n\t\t{\r\n\t\t\tmTmpLoadingTimeB = osGetTime();\r\n\t\t\tmTmpLoadingDirB *= -1;\r\n\t\t}\r\n\r\n\t\treturn callback();\r\n\t});\r\n\treturn 0;\r\n}\r\n\r\nint PPUI::DrawDialogMessage(PPSessionManager* sessionManager, const char* title, const char* body)\r\n{\r\n\tif (mDialogBox) return 0;\r\n\tmDialogBox = new PopupCallback([=]()\r\n\t{\r\n\t\tVector3 bodySize = PPGraphics::Get()->GetTextSizeAutoWrap(body, 0.5, 0.5, 280);\r\n\t\tfloat popupHeight = bodySize.y + 40 + 36;\r\n\t\tif (popupHeight > 220) popupHeight = 220;\r\n\r\n\t\tfloat spaceY = (240.0f - popupHeight) / 2.0f;\r\n\r\n\t\t// draw background\r\n\t\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, PPGraphics::Get()->TransBackgroundDark);\r\n\r\n\t\t// draw dialog box\r\n\t\tPPGraphics::Get()->DrawRectangle(13, spaceY, 294, popupHeight + 4, mDialogOverride.isActivate ? 
mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, 6.0f);\r\n\t\tPPGraphics::Get()->DrawRectangle(15, spaceY + 2, 290, popupHeight, RGB(247, 247, 247), 6.0f);\r\n\r\n\t\t// draw title\r\n\t\tLabelBox(15, spaceY + 2, 290, 30, title, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryTextColor, 0.8);\r\n\t\tLabelBoxAutoWrap(20, spaceY + 40, 280, bodySize.y, body, RGB(255, 255, 255), PPGraphics::Get()->PrimaryTextColor);\r\n\r\n\t\t// draw button close\r\n\t\tif(FlatColorButton(135, spaceY + 40 + bodySize.y + 4, 50, 30, \"Close\", \r\n\t\t\tPPGraphics::Get()->AccentColor, PPGraphics::Get()->AccentDarkColor, PPGraphics::Get()->AccentTextColor, 6.0f))\r\n\t\t{\r\n\t\t\treturn -1;\r\n\t\t}\r\n\t\treturn 0;\r\n\t});\r\n\treturn 0;\r\n}\r\n\r\nint PPUI::DrawDialogMessage(PPSessionManager* sessionManager, const char* title, const char* body,\r\n\tPopupCallback closeCallback)\r\n{\r\n\tif (mDialogBox) return 0;\r\n\tmDialogBox = new PopupCallback([=]()\r\n\t{\r\n\t\tVector3 bodySize = PPGraphics::Get()->GetTextSizeAutoWrap(body, 0.5, 0.5, 280);\r\n\t\tfloat popupHeight = bodySize.y + 40 + 36;\r\n\t\tif (popupHeight > 220) popupHeight = 220;\r\n\r\n\t\tfloat spaceY = (240.0f - popupHeight) / 2.0f;\r\n\r\n\t\t// draw background\r\n\t\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, PPGraphics::Get()->TransBackgroundDark);\r\n\r\n\t\t// draw dialog box\r\n\t\tPPGraphics::Get()->DrawRectangle(13, spaceY, 294, popupHeight + 4, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, 6.0f);\r\n\t\tPPGraphics::Get()->DrawRectangle(15, spaceY + 2, 290, popupHeight, RGB(247, 247, 247), 6.0f);\r\n\r\n\t\t// draw title\r\n\t\tLabelBox(15, spaceY + 2, 290, 30, title, mDialogOverride.isActivate ? 
mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryTextColor, 0.8);\r\n\t\tLabelBoxAutoWrap(20, spaceY + 40, 280, bodySize.y, body, RGB(255, 255, 255), PPGraphics::Get()->PrimaryTextColor);\r\n\r\n\t\t// draw button close\r\n\t\tif (FlatColorButton(135, spaceY + 40 + bodySize.y + 4, 50, 30, \"Close\",\r\n\t\t\tPPGraphics::Get()->AccentColor, PPGraphics::Get()->AccentDarkColor, PPGraphics::Get()->AccentTextColor, 6.0f))\r\n\t\t{\r\n\t\t\treturn closeCallback();\r\n\t\t}\r\n\t\treturn 0;\r\n\t});\r\n\treturn 0;\r\n}\r\n\r\nint PPUI::DrawDialogMessage(PPSessionManager* sessionManager, const char* title, const char* body,\r\n\tPopupCallback cancelCallback, PopupCallback okCallback)\r\n{\r\n\tif (mDialogBox) return 0;\r\n\tmDialogBox = new PopupCallback([=]()\r\n\t{\r\n\t\tVector3 bodySize = PPGraphics::Get()->GetTextSizeAutoWrap(body, 0.5, 0.5, 280);\r\n\t\tfloat popupHeight = bodySize.y + 40 + 36;\r\n\t\tif (popupHeight > 220) popupHeight = 220;\r\n\r\n\t\tfloat spaceY = (240.0f - popupHeight) / 2.0f;\r\n\r\n\t\t// draw background\r\n\t\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, PPGraphics::Get()->TransBackgroundDark);\r\n\r\n\t\t// draw dialog box\r\n\t\tPPGraphics::Get()->DrawRectangle(13, spaceY, 294, popupHeight + 4, mDialogOverride.isActivate ? mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, 6.0f);\r\n\t\tPPGraphics::Get()->DrawRectangle(15, spaceY + 2, 290, popupHeight, RGB(247, 247, 247), 6.0f);\r\n\r\n\t\t// draw title\r\n\t\tLabelBox(15, spaceY + 2, 290, 30, title, mDialogOverride.isActivate ? 
mDialogOverride.TitleBgColor : PPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryTextColor, 0.8);\r\n\t\tLabelBoxAutoWrap(20, spaceY + 40, 280, bodySize.y, body, RGB(255, 255, 255), PPGraphics::Get()->PrimaryTextColor);\r\n\r\n\t\t// draw button close\r\n\t\tif (FlatColorButton(115, spaceY + 40 + bodySize.y + 4, 40, 30, \"Close\",\r\n\t\t\tPPGraphics::Get()->AccentColor, PPGraphics::Get()->AccentDarkColor, PPGraphics::Get()->AccentTextColor, 6.0f))\r\n\t\t{\r\n\t\t\treturn cancelCallback();\r\n\t\t}\r\n\r\n\t\t// draw button ok\r\n\t\tif (FlatColorButton(165, spaceY + 40 + bodySize.y + 4, 40, 30, \"OK\",\r\n\t\t\tPPGraphics::Get()->PrimaryColor, PPGraphics::Get()->PrimaryDarkColor, PPGraphics::Get()->PrimaryTextColor, 6.0f))\r\n\t\t{\r\n\t\t\treturn okCallback();\r\n\t\t}\r\n\t\treturn 0;\r\n\t});\r\n\treturn 0;\r\n}\r\n\r\nint PPUI::DrawStreamConfigUI(PPSessionManager* sessionManager, ResultCallback cancel, ResultCallback ok)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(0, 0, 320, 240, RGB(236, 240, 241));\r\n\tLabelBox(0, 0, 320, 30, \"Advance Config\", RGB(26, 188, 156), RGB(255, 255, 255));\r\n\r\n\r\n\t\r\n\t//ConfigManager::Get()->_cfg_video_quality = Slide(5, 40, 300, 30, ConfigManager::Get()->_cfg_video_quality, 10, 100, \"Quality\");\r\n\t//ConfigManager::Get()->_cfg_video_scale = Slide(5, 70, 300, 30, ConfigManager::Get()->_cfg_video_scale, 10, 100, \"Scale\");\r\n\t//ConfigManager::Get()->_cfg_skip_frame = Slide(5, 100, 300, 30, ConfigManager::Get()->_cfg_skip_frame, 0, 60, \"Skip Frame\");\r\n\t//ConfigManager::Get()->_cfg_wait_for_received = ToggleBox(5, 130, 300, 30, ConfigManager::Get()->_cfg_wait_for_received, \"Wait Received\");\r\n\r\n\r\n\t// Cancel button\r\n\tif (FlatColorButton(200, 200, 50, 30, \"Cancel\", RGB(192, 57, 43), RGB(231, 76, 60), RGB(255, 255, 255)))\r\n\t{\r\n\t\tClosePopup();\r\n\t\tcancel(nullptr, nullptr);\r\n\t}\r\n\r\n\t// OK button\r\n\tif (FlatColorButton(260, 200, 50, 30, \"OK\", RGB(41, 128, 185), RGB(52, 152, 
219), RGB(255, 255, 255)))\r\n\t{\r\n\t\tClosePopup();\r\n\t\tif (ok != nullptr) ok(nullptr, nullptr);\r\n\t}\r\n\r\n\treturn 0;\r\n}\r\n\r\nint PPUI::DrawIdleBottomScreen(PPSessionManager* sessionManager)\r\n{\r\n\t// touch screen to wake up\r\n\tif (TouchUpOnArea(0, 0, 320, 240))\r\n\t{\r\n\t\tsleepModeState = 1;\r\n\t}\r\n\t// label\r\n\tLabelBox(0, 0, 320, 240, \"Touch screen to wake up\", RGB(0, 0, 0), RGB(125, 125, 125));\r\n\r\n\tInfoBox(sessionManager);\r\n\r\n\treturn 0;\r\n}\r\n\r\nvoid PPUI::InfoBox(PPSessionManager* sessionManager)\r\n{\r\n\t// render video FPS\r\n\tchar videoFpsBuffer[100];\r\n\tsnprintf(videoFpsBuffer, sizeof videoFpsBuffer, \"FPS:%.1f|VPS:%.1f\", sessionManager->GetFPS(), sessionManager->GetVideoFPS());\r\n\tLabelBoxLeft(5, 220, 100, 20, videoFpsBuffer, TRANSPARENT, RGB(150, 150, 150), 0.4f);\r\n\tsnprintf(videoFpsBuffer, sizeof videoFpsBuffer, \"CPU:%.1f|GPU:%.1f|CMD:%.1f\", C3D_GetProcessingTime()*6.0f, C3D_GetDrawingTime()*6.0f, C3D_GetCmdBufUsage()*100.0f);\r\n\tLabelBoxLeft(5, 210, 100, 20, videoFpsBuffer, TRANSPARENT, RGB(150, 150, 150), 0.4f);\r\n}\r\n\r\nint PPUI::DrawTabs(const char* tabs[], u32 tabCount, int activeTab, float x, float y, float w, float h)\r\n{\r\n\tint ret = -1;\r\n\tfloat tabPadding = 10;\r\n\tfloat cX = x + tabPadding / 2;\r\n\tColor separetorActive = RGB(85, 239, 196); separetorActive.lighten(0.2f);\r\n\tColor separetor = RGB(178, 190, 195); separetor.lighten(0.2f);\r\n\t// Draw tab part\r\n\tfor (int i = 0; i < (int)tabCount; ++i)\r\n\t{\r\n\t\tconst Vector2 tabTitleSize = PPGraphics::Get()->GetTextSize(tabs[i], 0.5f, 0.5f);\r\n\t\t// tab button\r\n\t\tif (FlatColorButton(cX, y + (activeTab == i ? 0 : tabPadding/2), tabTitleSize.x + tabPadding * 2, 25 + (activeTab == i ? tabPadding / 2 : 0), tabs[i],\r\n\t\t\tactiveTab == i ? RGB(30, 144, 255) : RGB(223, 228, 234),\r\n\t\t\tactiveTab == i ? RGB(83, 82, 237) : RGB(30, 144, 255),\r\n\t\t\tactiveTab == i ? 
RGB(241, 242, 246) : RGB(47, 53, 66)))\r\n\t\t{\r\n\t\t\tret = i;\r\n\t\t}\r\n\t\tcX += tabTitleSize.x + tabPadding * 2;\r\n\r\n\t\t// separator line\r\n\t\tPPGraphics::Get()->DrawRectangle(cX - 1, y + (activeTab == i ? 0 : tabPadding / 2), 1, 25 + (activeTab == i ? tabPadding / 2 : 0),\r\n\t\t\tactiveTab == i ? separetorActive : separetor);\r\n\t}\r\n\r\n\r\n\t// Draw content background and scroll if needed\r\n\tfloat cY = y + tabPadding / 2 + 25;\r\n\tPPGraphics::Get()->DrawRectangle(x, cY, w, h - (tabPadding / 2 + 25), RGB(223, 228, 234));\r\n\r\n\treturn ret;\r\n}\r\n\r\nint PPUI::DrawDialogBox(PPSessionManager* sessionManager)\r\n{\r\n\tif (mDialogBox != nullptr) {\r\n\t\tmTmpLockTouch = false;\r\n\t\tint ret = (*mDialogBox)();\r\n\t\tmTmpLockTouch = true;\r\n\t\tif (ret < 0)\r\n\t\t{\r\n\t\t\tmTmpLockTouch = false;\r\n\t\t\tdelete mDialogBox;\r\n\t\t\tmDialogBox = nullptr;\r\n\t\t\t// disable the override once the dialog finishes\r\n\t\t\tmDialogOverride.isActivate = false;\r\n\t\t\tmDialogOverride.Title = nullptr;\r\n\t\t\tmDialogOverride.Body = nullptr;\r\n\t\t\t// trigger the deferred callback once the dialog finishes\r\n\t\t\tif (mDialogBoxCallLater != nullptr)\r\n\t\t\t{\r\n\t\t\t\t(*mDialogBoxCallLater)();\r\n\t\t\t\tdelete mDialogBoxCallLater;\r\n\t\t\t\tmDialogBoxCallLater = nullptr;\r\n\t\t\t}\r\n\t\t\tif (ret == RET_CLOSE_APP) return -1;\r\n\t\t}\r\n\t}\r\n\treturn 0;\r\n}\r\n\r\nVector2 PPUI::ScrollBox(float x, float y, float w, float h, Direction dir, Vector2 cursor, WHCallback contentDraw)\r\n{\r\n\tPPGraphics::Get()->StartMasked(x, y, w, h, GFX_BOTTOM);\r\n\r\n\t// return content size after drawing\r\n\tWH ret = contentDraw();\r\n\r\n\t// check for touch scrolling\r\n\tVector2 dif;\r\n\tif (TouchDownOnArea(x, y, w, h) && !mTmpLockTouch)\r\n\t{\r\n\t\tdif = Vector2{ (float)kTouch.px - mScrollLastTouch.x, (float)kTouch.py - mScrollLastTouch.y };\r\n\t\tif (mScrolling) {\r\n\t\t\t// calculate speed modifier\r\n\t\t\tif (fabsf(dif.x) >= 3 && (dir == D_HORIZONTAL || dir == 
D_BOTH))\r\n\t\t\t{\r\n\t\t\t\tmScrollSpeedModified += SCROLL_SPEED_STEP;\r\n\t\t\t}\r\n\t\t\tif (fabsf(dif.y) >= 3 && (dir == D_VERTICAL || dir == D_BOTH))\r\n\t\t\t{\r\n\t\t\t\tmScrollSpeedModified += SCROLL_SPEED_STEP;\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// check if not scrolling yet\r\n\t\tif (!mScrolling && mScrollLastTouch.x > -1.f)\r\n\t\t{\r\n\t\t\tif (fabsf(dif.x) >= SCROLL_THRESHOLD && (dir == D_HORIZONTAL || dir == D_BOTH))\r\n\t\t\t{\r\n\t\t\t\tmScrolling = true;\r\n\t\t\t}\r\n\t\t\tif (fabsf(dif.y) >= SCROLL_THRESHOLD && (dir == D_VERTICAL || dir == D_BOTH))\r\n\t\t\t{\r\n\t\t\t\tmScrolling = true;\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\tmScrollLastTouch = Vector2{ (float)kTouch.px, (float)kTouch.py };\r\n\t} else\r\n\t{\r\n\t\tmScrollLastTouch = Vector2{ -1.0f, -1.0f };\r\n\t\tmScrolling = false;\r\n\t\tmScrollSpeedModified = SCROLL_SPEED_MODIFIED;\r\n\t}\r\n\r\n\t// check for scroll box\r\n\tswitch (dir)\r\n\t{\r\n\t\tcase D_NONE:\r\n\t\t{\r\n\t\t\t// Do not display scroll bar\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tcase D_HORIZONTAL:\r\n\t\t{\r\n\t\t\tVector2 maxScroll = { ret.width - w , 0 };\r\n\t\t\tfloat p2c = w / (float)ret.width;\r\n\t\t\tif (p2c < SCROLL_BAR_MIN) p2c = SCROLL_BAR_MIN;\r\n\t\t\t// only display the scroll bar if the visible ratio is smaller than 1.0\r\n\t\t\tif (p2c < 1.0f)\r\n\t\t\t{\r\n\t\t\t\tfloat bW = floorf(p2c * w);\r\n\r\n\t\t\t\tif (mScrolling) {\r\n\t\t\t\t\tcursor.x += (float)dif.x + mScrollSpeedModified * (float)dif.x;\r\n\t\t\t\t\tif (cursor.x > maxScroll.x) cursor.x = maxScroll.x;\r\n\t\t\t\t\tif (cursor.x < 0) cursor.x = 0;\r\n\t\t\t\t\t//TODO: cursor is used as the content drawing offset\r\n\t\t\t\t}\r\n\r\n\t\t\t\tfloat moveableW = w - bW - 8; // 8 is padding\r\n\t\t\t\tfloat barMovePercent = cursor.x / maxScroll.x;\r\n\r\n\t\t\t\tPPGraphics::Get()->DrawRectangle(x + 4 + floorf(moveableW * barMovePercent), y + h - 10, bW, 6, RGBA(83, 92, 104, 150), 3.0f);\r\n\t\t\t}\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tcase D_VERTICAL:\r\n\t\t{\r\n\t\t\tVector2 maxScroll = { 0 
, ret.height - h };\r\n\t\t\tfloat p2c = h / (float)ret.height;\r\n\t\t\tif (p2c < SCROLL_BAR_MIN) p2c = SCROLL_BAR_MIN;\r\n\t\t\t// only display the scroll bar if the visible ratio is smaller than 1.0\r\n\t\t\tif (p2c < 1.0f)\r\n\t\t\t{\r\n\t\t\t\tfloat bH = floorf(p2c * h);\r\n\r\n\t\t\t\tif (mScrolling) {\r\n\t\t\t\t\tcursor.y += (float)dif.y + mScrollSpeedModified * (float)dif.y;\r\n\t\t\t\t\tif (cursor.y > maxScroll.y) cursor.y = maxScroll.y;\r\n\t\t\t\t\tif (cursor.y < 0) cursor.y = 0;\r\n\t\t\t\t}\r\n\r\n\t\t\t\tfloat moveableH = h - bH - 8; // 8 is padding\r\n\t\t\t\tfloat barMovePercent = cursor.y / maxScroll.y;\r\n\r\n\t\t\t\tPPGraphics::Get()->DrawRectangle(x + w - 10, y + 4 + floorf(moveableH * barMovePercent), 6, bH, RGBA(83, 92, 104, 150), 3.0f);\r\n\t\t\t}\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tcase D_BOTH:\r\n\t\t{\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tdefault:\r\n\t\t{\r\n\t\t\tbreak;\r\n\t\t}\r\n\t}\r\n\r\n\tPPGraphics::Get()->StopMasked();\r\n\treturn cursor;\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////\r\n// SLIDE\r\n///////////////////////////////////////////////////////////////////////////\r\n\r\nfloat PPUI::Slide(float x, float y, float w, float h, float val, float min, float max, float step, const char* label)\r\n{\r\n\tVector2 tSize = PPGraphics::Get()->GetTextSize(label, 0.5f, 0.5f);\r\n\tfloat labelY = (h - tSize.y) / 2.0f;\r\n\tfloat labelX = x + 5.f;\r\n\tfloat slideX = w / 100.f * 35.f;\r\n\tfloat marginY = 2;\r\n\r\n\tif (val < min) val = min;\r\n\tif (val > max) val = max;\r\n\t// draw label\r\n\tPPGraphics::Get()->DrawText(label, x + labelX, y + labelY, 0.5f, 0.5f, RGB(26, 26, 26), false);\r\n\r\n\t// draw bg\r\n\tfloat startX = x + slideX;\r\n\tfloat startY = y + marginY;\r\n\tw = w - slideX;\r\n\th = h - 2 * marginY;\r\n\tPPGraphics::Get()->DrawRectangle(startX, startY, w, h, PPGraphics::Get()->PrimaryDarkColor);\r\n\r\n\tchar valBuffer[50];\r\n\tsnprintf(valBuffer, sizeof valBuffer, \"%.1f\", val, 
val);\r\n\tVector2 valSize = PPGraphics::Get()->GetTextSize(valBuffer, 0.5f, 0.5f);\r\n\tfloat valueX = (w - valSize.x) / 2.0f;\r\n\tfloat valueY = (h - valSize.y) / 2.0f;\r\n\r\n\t// draw value\r\n\tPPGraphics::Get()->DrawText(valBuffer, startX + valueX, startY + valueY, 0.5f, 0.5f, PPGraphics::Get()->AccentTextColor, false);\r\n\tfloat newValue = val;\r\n\t// draw plus and minus button\r\n\tif(RepeatButton(startX + 1, startY + 1, 30 - 2, h - 2, \"<\", RGB(236, 240, 241), RGB(189, 195, 199), RGB(44, 62, 80)))\r\n\t{\r\n\t\t// minus\r\n\t\tnewValue -= step;\r\n\t}\r\n\r\n\tif(RepeatButton(startX + w - 30, startY + 1, 30 - 1, h - 2, \">\", RGB(236, 240, 241), RGB(189, 195, 199), RGB(44, 62, 80)))\r\n\t{\r\n\t\t// plus\r\n\t\tnewValue += step;\r\n\t}\r\n\tif (newValue < min) newValue = min;\r\n\tif (newValue > max) newValue = max;\r\n\treturn newValue;\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////\r\n// CHECKBOX\r\n///////////////////////////////////////////////////////////////////////////\r\nbool PPUI::ToggleBox(float x, float y, float w, float h, bool value, const char* label)\r\n{\r\n\tVector2 tSize = PPGraphics::Get()->GetTextSize(label, 0.5f, 0.5f);\r\n\tfloat labelY = (h - tSize.y) / 2.0f;\r\n\tfloat labelX = x + 5.f;\r\n\tfloat boxSize = (w / 100.f * 35.f);\r\n\tfloat marginX = w - boxSize;\r\n\tfloat marginY = 2;\r\n\r\n\t// draw label\r\n\tPPGraphics::Get()->DrawText(label, x + labelX, y + labelY, 0.5f, 0.5f, RGB(26, 26, 26), false);\r\n\r\n\t// draw bg\r\n\tfloat startX = x + marginX;\r\n\tfloat startY = y + marginY;\r\n\tw = w - marginX;\r\n\th = h - 2 * marginY;\r\n\tPPGraphics::Get()->DrawRectangle(startX, startY, w, h, PPGraphics::Get()->PrimaryDarkColor);\r\n\r\n\tbool result = value;\r\n\tif(value)\r\n\t{\r\n\t\t// on button\r\n\t\tFlatColorButton(startX + 1, startY + 1, (boxSize / 2) - 2, h - 2, \"On\", RGB(236, 240, 241), RGB(189, 195, 199), RGB(44, 62, 80));\r\n\t\t// off button\r\n\t\tif 
(FlatColorButton(startX + 1 + (boxSize / 2), startY + 1, (boxSize / 2) - 2, h - 2, \"Off\", PPGraphics::Get()->PrimaryDarkColor, PPGraphics::Get()->PrimaryColor, RGB(236, 240, 241)))\r\n\t\t{\r\n\t\t\tif(!mTmpLockTouch) result = false;\r\n\t\t}\r\n\t}else\r\n\t{\r\n\t\t// on button\r\n\t\tif (FlatColorButton(startX + 1, startY + 1, (boxSize / 2) - 2, h - 2, \"On\", PPGraphics::Get()->PrimaryDarkColor, PPGraphics::Get()->PrimaryColor, RGB(236, 240, 241)))\r\n\t\t{\r\n\t\t\tif (!mTmpLockTouch) result = true;\r\n\t\t}\r\n\t\t// off button\r\n\t\tFlatColorButton(startX + 1 + (boxSize / 2), startY + 1, (boxSize / 2) - 2, h - 2, \"Off\", RGB(236, 240, 241), RGB(189, 195, 199), RGB(44, 62, 80));\r\n\t}\r\n\treturn result;\r\n}\r\n\r\nbool PPUI::SelectBox(float x, float y, float w, float h, Color color, float rounding)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(x, y, w, h, color, rounding);\r\n\treturn TouchUpOnArea(x, y, w, h) && !mTmpLockTouch;\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////\r\n// BUTTON\r\n///////////////////////////////////////////////////////////////////////////\r\n\r\nbool PPUI::FlatButton(float x, float y, float w, float h, const char* label, float rounding)\r\n{\r\n\treturn FlatColorButton(x, y, w, h, label, RGB(26, 188, 156), RGB(46, 204, 113), RGB(236, 240, 241), rounding);\r\n}\r\n\r\nbool PPUI::FlatDarkButton(float x, float y, float w, float h, const char* label, float rounding)\r\n{\r\n\treturn FlatColorButton(x, y, w, h, label, RGB(22, 160, 133), RGB(39, 174, 96), RGB(236, 240, 241), rounding);\r\n}\r\n\r\nbool PPUI::FlatColorButton(float x, float y, float w, float h, const char* label, Color colNormal, Color colActive, Color txtCol, float rounding)\r\n{\r\n\tfloat tScale = 0.5f;\r\n\tif (TouchDownOnArea(x, y, w, h) && !mTmpLockTouch)\r\n\t{\r\n\t\tPPGraphics::Get()->DrawRectangle(x, y, w, h, colActive, rounding);\r\n\t\ttScale = 0.6f;\r\n\t}\r\n\telse\r\n\t{\r\n\t\tPPGraphics::Get()->DrawRectangle(x, 
y, w, h, colNormal, rounding);\r\n\t}\r\n\tVector2 tSize = PPGraphics::Get()->GetTextSize(label, tScale, tScale);\r\n\tPPGraphics::Get()->DrawText(label, x + (w - tSize.x) / 2.0f, y + (h - tSize.y) / 2.0f, tScale, tScale, txtCol, false);\r\n\treturn TouchUpOnArea(x, y, w, h) && !mTmpLockTouch;\r\n}\r\n\r\nbool PPUI::RepeatButton(float x, float y, float w, float h, const char* label, Color colNormal, Color colActive, Color txtCol)\r\n{\r\n\tbool isTouchDown = TouchDownOnArea(x, y, w, h);\r\n\tfloat tScale = 0.5f;\r\n\tu64 difTime = 0;\r\n\tif (isTouchDown && !mTmpLockTouch)\r\n\t{\r\n\t\tPPGraphics::Get()->DrawRectangle(x, y, w, h, colActive);\r\n\t\ttScale = 0.6f;\r\n\t\t\r\n\t\tif (holdTime == 0)\r\n\t\t{\r\n\t\t\tholdTime = osGetTime();\r\n\t\t} else\r\n\t\t{\r\n\t\t\tdifTime = osGetTime() - holdTime;\r\n\t\t}\r\n\t}\r\n\telse\r\n\t{\r\n\t\tPPGraphics::Get()->DrawRectangle(x, y, w, h, colNormal);\r\n\t}\r\n\tVector2 tSize = PPGraphics::Get()->GetTextSize(label, tScale, tScale);\r\n\tfloat startX = (w - tSize.x) / 2.0f;\r\n\tfloat startY = (h - tSize.y) / 2.0f;\r\n\tPPGraphics::Get()->DrawText(label, x + startX, y + startY, tScale, tScale, txtCol, false);\r\n\r\n\t\r\n\treturn ((isTouchDown && difTime > 500) || TouchUpOnArea(x, y, w, h)) && !mTmpLockTouch;\r\n}\r\n\r\n///////////////////////////////////////////////////////////////////////////\r\n// TEXT\r\n///////////////////////////////////////////////////////////////////////////\r\n\r\n/**\r\n * \\brief Draw label box\r\n * \\param x \r\n * \\param y \r\n * \\param w \r\n * \\param h \r\n * \\param label \r\n * \\param bgColor \r\n * \\param txtColor \r\n * \\param scale \r\n * \\param rounding \r\n */\r\nint PPUI::LabelBox(float x, float y, float w, float h, const char* label, Color bgColor, Color txtColor, float scale, float rounding)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(x, y, w, h, bgColor, rounding);\r\n\tVector2 tSize = PPGraphics::Get()->GetTextSize(label, scale, scale);\r\n\tfloat startX = (w - tSize.x) / 2.0f;\r\n\tfloat startY = (h - tSize.y) / 
2.0f;\r\n\tPPGraphics::Get()->DrawText(label, x + startX, y + startY, scale, scale, txtColor, false);\r\n\r\n\r\n#ifdef UI_DEBUG\r\n\tchar buffer[100];\r\n\tsnprintf(buffer, sizeof buffer, \"w:%.02f|h:%.02f\", tSize.x, tSize.y);\r\n\tPPGraphics::Get()->DrawText(buffer, x , y - 10, 0.8f * scale, 0.8f * scale, txtColor, false);\r\n#endif\r\n\r\n\treturn TouchUpOnArea(x, y, w, h) && !mTmpLockTouch;\r\n}\r\n\r\nint PPUI::LabelBoxAutoWrap(float x, float y, float w, float h, const char* label, Color bgColor, Color txtColor,\r\n\tfloat scale, float rounding)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(x, y, w, h, bgColor, rounding);\r\n\tVector3 tSize = PPGraphics::Get()->GetTextSizeAutoWrap(label, scale, scale, w);\r\n\tfloat startX = (w - tSize.x) / 2.0f;\r\n\tfloat startY = (h - tSize.y) / 2.0f;\r\n\tPPGraphics::Get()->DrawTextAutoWrap(label, x + startX, y + startY, w, scale, scale, txtColor, false);\r\n\r\n#ifdef UI_DEBUG\r\n\tchar buffer[100];\r\n\tsnprintf(buffer, sizeof buffer, \"w:%.02f|h:%.02f|l:%d\", tSize.x, tSize.y, (int)tSize.z);\r\n\tPPGraphics::Get()->DrawText(buffer, x, y - 10, 0.8f * scale, 0.8f * scale, txtColor, false);\r\n#endif\r\n\r\n\treturn TouchUpOnArea(x, y, w, h) && !mTmpLockTouch;\r\n}\r\n\r\nint PPUI::LabelBoxLeft(float x, float y, float w, float h, const char* label, Color bgColor, Color txtColor, float scale, float rounding)\r\n{\r\n\tPPGraphics::Get()->DrawRectangle(x, y, w, h, bgColor, rounding);\r\n\tVector2 tSize = PPGraphics::Get()->GetTextSize(label, scale, scale);\r\n\tfloat startY = (h - tSize.y) / 2.0f;\r\n\tPPGraphics::Get()->DrawText(label, x, y + startY, scale, scale, txtColor, false);\r\n\r\n#ifdef UI_DEBUG\r\n\tchar buffer[100];\r\n\tsnprintf(buffer, sizeof buffer, \"w:%.02f|h:%.02f\", tSize.x, tSize.y);\r\n\tPPGraphics::Get()->DrawText(buffer, x, y - 10, 0.8f * scale, 0.8f * scale, txtColor, false);\r\n#endif\r\n\r\n\treturn TouchUpOnArea(x, y, w, h) && 
!mTmpLockTouch;\r\n}\r\n\r\n\r\n///////////////////////////////////////////////////////////////////////////\r\n// POPUP\r\n///////////////////////////////////////////////////////////////////////////\r\n\r\nbool PPUI::HasPopup()\r\n{\r\n\treturn mPopupList.size() > 0;\r\n}\r\n\r\nPopupCallback PPUI::GetPopup()\r\n{\r\n\treturn mPopupList[mPopupList.size() - 1];\r\n}\r\n\r\nvoid PPUI::ClosePopup()\r\n{\r\n\tmPopupList.erase(mPopupList.end() - 1);\r\n}\r\n\r\nvoid PPUI::AddPopup(PopupCallback callback)\r\n{\r\n\tmPopupList.push_back(callback);\r\n}"
  },
  {
    "path": "PinBox/PinBox/source/easing.cpp",
    "content": "#include <cmath>\n#include <map>\n\n#include \"easing.h\"\n\n#ifndef PI\n#define PI 3.14159265358979\n#endif\n\ndouble easeInSine( double t ) {\n\treturn sin( 1.5707963 * t );\n}\n\ndouble easeOutSine( double t ) {\n\treturn 1 + sin( 1.5707963 * (t - 1) );\n}\n\ndouble easeInOutSine( double t ) {\n\treturn 0.5 * (1 + sin( 3.1415926 * (t - 0.5) ) );\n}\n\ndouble easeInQuad( double t ) {\n    return t * t;\n}\n\ndouble easeOutQuad( double t ) { \n    return t * (2 - t);\n}\n\ndouble easeInOutQuad( double t ) {\n    return t < 0.5 ? 2 * t * t : t * (4 - 2 * t) - 1;\n}\n\ndouble easeInCubic( double t ) {\n    return t * t * t;\n}\n\ndouble easeOutCubic( double t ) {\n    t -= 1;\n    return 1 + t * t * t;\n}\n\ndouble easeInOutCubic( double t ) {\n    if( t < 0.5 ) return 4 * t * t * t;\n    t -= 1;\n    return 1 + 4 * t * t * t;\n}\n\ndouble easeInQuart( double t ) {\n    t *= t;\n    return t * t;\n}\n\ndouble easeOutQuart( double t ) {\n    t -= 1;\n    t = t * t;\n    return 1 - t * t;\n}\n\ndouble easeInOutQuart( double t ) {\n    if( t < 0.5 ) {\n        t *= t;\n        return 8 * t * t;\n    } else {\n        t -= 1;\n        t = t * t;\n        return 1 - 8 * t * t;\n    }\n}\n\ndouble easeInQuint( double t ) {\n    double t2 = t * t;\n    return t * t2 * t2;\n}\n\ndouble easeOutQuint( double t ) {\n    t -= 1;\n    double t2 = t * t;\n    return 1 + t * t2 * t2;\n}\n\ndouble easeInOutQuint( double t ) {\n    double t2;\n    if( t < 0.5 ) {\n        t2 = t * t;\n        return 16 * t * t2 * t2;\n    } else {\n        t -= 1;\n        t2 = t * t;\n        return 1 + 16 * t * t2 * t2;\n    }\n}\n\ndouble easeInExpo( double t ) {\n    return (pow( 2, 8 * t ) - 1) / 255;\n}\n\ndouble easeOutExpo( double t ) {\n    return 1 - pow( 2, -8 * t );\n}\n\ndouble easeInOutExpo( double t ) {\n    if( t < 0.5 ) {\n        return (pow( 2, 16 * t ) - 1) / 510;\n    } else {\n        return 1 - 0.5 * pow( 2, -16 * (t - 0.5) );\n    }\n}\n\ndouble easeInCirc( double t ) {\n    return 1 - sqrt( 1 - t );\n}\n\ndouble easeOutCirc( 
double t ) {\n    return sqrt( t );\n}\n\ndouble easeInOutCirc( double t ) {\n    if( t < 0.5 ) {\n        return (1 - sqrt( 1 - 2 * t )) * 0.5;\n    } else {\n        return (1 + sqrt( 2 * t - 1 )) * 0.5;\n    }\n}\n\ndouble easeInBack( double t ) {\n    return t * t * (2.70158 * t - 1.70158);\n}\n\ndouble easeOutBack( double t ) {\n    t -= 1;\n    return 1 + t * t * (2.70158 * t + 1.70158);\n}\n\ndouble easeInOutBack( double t ) {\n    if( t < 0.5 ) {\n        return t * t * (7 * t - 2.5) * 2;\n    } else {\n        t -= 1;\n        return 1 + t * t * 2 * (7 * t + 2.5);\n    }\n}\n\ndouble easeInElastic( double t ) {\n    double t2 = t * t;\n    return t2 * t2 * sin( t * PI * 4.5 );\n}\n\ndouble easeOutElastic( double t ) {\n    double t2 = (t - 1) * (t - 1);\n    return 1 - t2 * t2 * cos( t * PI * 4.5 );\n}\n\ndouble easeInOutElastic( double t ) {\n    double t2;\n    if( t < 0.45 ) {\n        t2 = t * t;\n        return 8 * t2 * t2 * sin( t * PI * 9 );\n    } else if( t < 0.55 ) {\n        return 0.5 + 0.75 * sin( t * PI * 4 );\n    } else {\n        t2 = (t - 1) * (t - 1);\n        return 1 - 8 * t2 * t2 * sin( t * PI * 9 );\n    }\n}\n\ndouble easeInBounce( double t ) {\n    return pow( 2, 6 * (t - 1) ) * fabs( sin( t * PI * 3.5 ) );\n}\n\ndouble easeOutBounce( double t ) {\n    return 1 - pow( 2, -6 * t ) * fabs( cos( t * PI * 3.5 ) );\n}\n\ndouble easeInOutBounce( double t ) {\n    if( t < 0.5 ) {\n        return 8 * pow( 2, 8 * (t - 1) ) * fabs( sin( t * PI * 7 ) );\n    } else {\n        return 1 - 8 * pow( 2, -8 * t ) * fabs( sin( t * PI * 7 ) );\n    }\n}\n\neasingFunction getEasingFunction( easing_functions function )\n{\n\tstatic std::map< easing_functions, easingFunction > easingFunctions;\n\tif( easingFunctions.empty() )\n\t{\n\t\teasingFunctions.insert( std::make_pair( EaseInSine, \teaseInSine ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutSine, \teaseOutSine ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutSine, \teaseInOutSine ) 
);\n\t\teasingFunctions.insert( std::make_pair( EaseInQuad, \teaseInQuad ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutQuad, \teaseOutQuad ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutQuad, \teaseInOutQuad ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInCubic, \teaseInCubic ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutCubic, \teaseOutCubic ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutCubic, easeInOutCubic ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInQuart, \teaseInQuart ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutQuart, \teaseOutQuart ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutQuart, easeInOutQuart) );\n\t\teasingFunctions.insert( std::make_pair( EaseInQuint, \teaseInQuint ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutQuint, \teaseOutQuint ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutQuint, easeInOutQuint ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInExpo, \teaseInExpo ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutExpo, \teaseOutExpo ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutExpo,\teaseInOutExpo ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInCirc, \teaseInCirc ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutCirc, \teaseOutCirc ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutCirc,\teaseInOutCirc ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInBack, \teaseInBack ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutBack, \teaseOutBack ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutBack,\teaseInOutBack ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInElastic, \teaseInElastic ) );\n\t\teasingFunctions.insert( std::make_pair( EaseOutElastic, easeOutElastic ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutElastic, easeInOutElastic ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInBounce, \teaseInBounce ) );\n\t\teasingFunctions.insert( 
std::make_pair( EaseOutBounce, \teaseOutBounce ) );\n\t\teasingFunctions.insert( std::make_pair( EaseInOutBounce, easeInOutBounce ) );\n\n\t}\n\n\tauto it = easingFunctions.find( function );\n\treturn it == easingFunctions.end() ? nullptr : it->second;\n}\n"
  },
  {
    "path": "PinBox/PinBox/source/lodepng.cpp",
    "content": "/*\nLodePNG version 20180819\n\nCopyright (c) 2005-2018 Lode Vandevenne\n\nThis software is provided 'as-is', without any express or implied\nwarranty. In no event will the authors be held liable for any damages\narising from the use of this software.\n\nPermission is granted to anyone to use this software for any purpose,\nincluding commercial applications, and to alter it and redistribute it\nfreely, subject to the following restrictions:\n\n1. The origin of this software must not be misrepresented; you must not\nclaim that you wrote the original software. If you use this software\nin a product, an acknowledgment in the product documentation would be\nappreciated but is not required.\n\n2. Altered source versions must be plainly marked as such, and must not be\nmisrepresented as being the original software.\n\n3. This notice may not be removed or altered from any source\ndistribution.\n*/\n\n/*\nThe manual and changelog are in the header file \"lodepng.h\"\nRename this file to lodepng.cpp to use it for C++, or to lodepng.c to use it for C.\n*/\n\n#include \"lodepng.h\"\n\n#include <limits.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#if defined(_MSC_VER) && (_MSC_VER >= 1310) /*Visual Studio: A few warning types are not desired here.*/\n#pragma warning( disable : 4244 ) /*implicit conversions: not warned by gcc -Wall -Wextra and requires too much casts*/\n#pragma warning( disable : 4996 ) /*VS does not like fopen, but fopen_s is not standard C so unusable here*/\n#endif /*_MSC_VER */\n\nconst char* LODEPNG_VERSION_STRING = \"20180819\";\n\n/*\nThis source file is built up in the following large parts. 
The code sections\nwith the \"LODEPNG_COMPILE_\" #defines divide this up further in an intermixed way.\n-Tools for C and common code for PNG and Zlib\n-C Code for Zlib (huffman, deflate, ...)\n-C Code for PNG (file format chunks, adam7, PNG filters, color conversions, ...)\n-The C++ wrapper around all of the above\n*/\n\n/*The malloc, realloc and free functions defined here with \"lodepng_\" in front\nof the name, so that you can easily change them to others related to your\nplatform if needed. Everything else in the code calls these. Pass\n-DLODEPNG_NO_COMPILE_ALLOCATORS to the compiler, or comment out\n#define LODEPNG_COMPILE_ALLOCATORS in the header, to disable the ones here and\ndefine them in your own project's source files without needing to change\nlodepng source code. Don't forget to remove \"static\" if you copypaste them\nfrom here.*/\n\n#ifdef LODEPNG_COMPILE_ALLOCATORS\nstatic void* lodepng_malloc(size_t size)\n{\n#ifdef LODEPNG_MAX_ALLOC\n\tif (size > LODEPNG_MAX_ALLOC) return 0;\n#endif\n\treturn malloc(size);\n}\n\nstatic void* lodepng_realloc(void* ptr, size_t new_size)\n{\n#ifdef LODEPNG_MAX_ALLOC\n\tif (new_size > LODEPNG_MAX_ALLOC) return 0;\n#endif\n\treturn realloc(ptr, new_size);\n}\n\nstatic void lodepng_free(void* ptr)\n{\n\tfree(ptr);\n}\n#else /*LODEPNG_COMPILE_ALLOCATORS*/\nvoid* lodepng_malloc(size_t size);\nvoid* lodepng_realloc(void* ptr, size_t new_size);\nvoid lodepng_free(void* ptr);\n#endif /*LODEPNG_COMPILE_ALLOCATORS*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* // Tools for C, and common code for PNG and Zlib.                       // */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#define LODEPNG_MAX(a, b) (((a) > (b)) ? (a) : (b))\n#define LODEPNG_MIN(a, b) (((a) < (b)) ? 
(a) : (b))\n\n/*\nOften in case of an error a value is assigned to a variable and then it breaks\nout of a loop (to go to the cleanup phase of a function). This macro does that.\nIt makes the error handling code shorter and more readable.\n\nExample: if(!uivector_resizev(&frequencies_ll, 286, 0)) ERROR_BREAK(83);\n*/\n#define CERROR_BREAK(errorvar, code)\\\n{\\\n  errorvar = code;\\\n  break;\\\n}\n\n/*version of CERROR_BREAK that assumes the common case where the error variable is named \"error\"*/\n#define ERROR_BREAK(code) CERROR_BREAK(error, code)\n\n/*Set error var to the error code, and return it.*/\n#define CERROR_RETURN_ERROR(errorvar, code)\\\n{\\\n  errorvar = code;\\\n  return code;\\\n}\n\n/*Try the code, if it returns error, also return the error.*/\n#define CERROR_TRY_RETURN(call)\\\n{\\\n  unsigned error = call;\\\n  if(error) return error;\\\n}\n\n/*Set error var to the error code, and return from the void function.*/\n#define CERROR_RETURN(errorvar, code)\\\n{\\\n  errorvar = code;\\\n  return;\\\n}\n\n/*\nAbout uivector, ucvector and string:\n-All of them wrap dynamic arrays or text strings in a similar way.\n-LodePNG was originally written in C++. 
The vectors replace the std::vectors that were used in the C++ version.\n-The string tools are made to avoid problems with compilers that declare things like strncat as deprecated.\n-They're not used in the interface, only internally in this file as static functions.\n-As with many other structs in this file, the init and cleanup functions serve as ctor and dtor.\n*/\n\n#ifdef LODEPNG_COMPILE_ZLIB\n/*dynamic vector of unsigned ints*/\ntypedef struct uivector\n{\n\tunsigned* data;\n\tsize_t size; /*size in number of unsigned longs*/\n\tsize_t allocsize; /*allocated size in bytes*/\n} uivector;\n\nstatic void uivector_cleanup(void* p)\n{\n\t((uivector*)p)->size = ((uivector*)p)->allocsize = 0;\n\tlodepng_free(((uivector*)p)->data);\n\t((uivector*)p)->data = NULL;\n}\n\n/*returns 1 if success, 0 if failure ==> nothing done*/\nstatic unsigned uivector_reserve(uivector* p, size_t allocsize)\n{\n\tif (allocsize > p->allocsize)\n\t{\n\t\tsize_t newsize = (allocsize > p->allocsize * 2) ? allocsize : (allocsize * 3 / 2);\n\t\tvoid* data = lodepng_realloc(p->data, newsize);\n\t\tif (data)\n\t\t{\n\t\t\tp->allocsize = newsize;\n\t\t\tp->data = (unsigned*)data;\n\t\t}\n\t\telse return 0; /*error: not enough memory*/\n\t}\n\treturn 1;\n}\n\n/*returns 1 if success, 0 if failure ==> nothing done*/\nstatic unsigned uivector_resize(uivector* p, size_t size)\n{\n\tif (!uivector_reserve(p, size * sizeof(unsigned))) return 0;\n\tp->size = size;\n\treturn 1; /*success*/\n}\n\n/*resize and give all new elements the value*/\nstatic unsigned uivector_resizev(uivector* p, size_t size, unsigned value)\n{\n\tsize_t oldsize = p->size, i;\n\tif (!uivector_resize(p, size)) return 0;\n\tfor (i = oldsize; i < size; ++i) p->data[i] = value;\n\treturn 1;\n}\n\nstatic void uivector_init(uivector* p)\n{\n\tp->data = NULL;\n\tp->size = p->allocsize = 0;\n}\n\n#ifdef LODEPNG_COMPILE_ENCODER\n/*returns 1 if success, 0 if failure ==> nothing done*/\nstatic unsigned uivector_push_back(uivector* p, 
unsigned c)\n{\n\tif (!uivector_resize(p, p->size + 1)) return 0;\n\tp->data[p->size - 1] = c;\n\treturn 1;\n}\n#endif /*LODEPNG_COMPILE_ENCODER*/\n#endif /*LODEPNG_COMPILE_ZLIB*/\n\n/* /////////////////////////////////////////////////////////////////////////// */\n\n/*dynamic vector of unsigned chars*/\ntypedef struct ucvector\n{\n\tunsigned char* data;\n\tsize_t size; /*used size*/\n\tsize_t allocsize; /*allocated size*/\n} ucvector;\n\n/*returns 1 if success, 0 if failure ==> nothing done*/\nstatic unsigned ucvector_reserve(ucvector* p, size_t allocsize)\n{\n\tif (allocsize > p->allocsize)\n\t{\n\t\tsize_t newsize = (allocsize > p->allocsize * 2) ? allocsize : (allocsize * 3 / 2);\n\t\tvoid* data = lodepng_realloc(p->data, newsize);\n\t\tif (data)\n\t\t{\n\t\t\tp->allocsize = newsize;\n\t\t\tp->data = (unsigned char*)data;\n\t\t}\n\t\telse return 0; /*error: not enough memory*/\n\t}\n\treturn 1;\n}\n\n/*returns 1 if success, 0 if failure ==> nothing done*/\nstatic unsigned ucvector_resize(ucvector* p, size_t size)\n{\n\tif (!ucvector_reserve(p, size * sizeof(unsigned char))) return 0;\n\tp->size = size;\n\treturn 1; /*success*/\n}\n\n#ifdef LODEPNG_COMPILE_PNG\n\nstatic void ucvector_cleanup(void* p)\n{\n\t((ucvector*)p)->size = ((ucvector*)p)->allocsize = 0;\n\tlodepng_free(((ucvector*)p)->data);\n\t((ucvector*)p)->data = NULL;\n}\n\nstatic void ucvector_init(ucvector* p)\n{\n\tp->data = NULL;\n\tp->size = p->allocsize = 0;\n}\n#endif /*LODEPNG_COMPILE_PNG*/\n\n#ifdef LODEPNG_COMPILE_ZLIB\n/*you can both convert from vector to buffer&size and vica versa. 
If you use\ninit_buffer to take over a buffer and size, it is not needed to use cleanup*/\nstatic void ucvector_init_buffer(ucvector* p, unsigned char* buffer, size_t size)\n{\n\tp->data = buffer;\n\tp->allocsize = p->size = size;\n}\n#endif /*LODEPNG_COMPILE_ZLIB*/\n\n#if (defined(LODEPNG_COMPILE_PNG) && defined(LODEPNG_COMPILE_ANCILLARY_CHUNKS)) || defined(LODEPNG_COMPILE_ENCODER)\n/*returns 1 if success, 0 if failure ==> nothing done*/\nstatic unsigned ucvector_push_back(ucvector* p, unsigned char c)\n{\n\tif (!ucvector_resize(p, p->size + 1)) return 0;\n\tp->data[p->size - 1] = c;\n\treturn 1;\n}\n#endif /*defined(LODEPNG_COMPILE_PNG) || defined(LODEPNG_COMPILE_ENCODER)*/\n\n\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_PNG\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\n/*free string pointer and set it to NULL*/\nstatic void string_cleanup(char** out)\n{\n\tlodepng_free(*out);\n\t*out = NULL;\n}\n\n/* dynamically allocates a new string with a copy of the null terminated input text */\nstatic char* alloc_string(const char* in)\n{\n\tsize_t insize = strlen(in);\n\tchar* out = (char*)lodepng_malloc(insize + 1);\n\tif (out)\n\t{\n\t\tsize_t i;\n\t\tfor (i = 0; i != insize; ++i)\n\t\t{\n\t\t\tout[i] = in[i];\n\t\t}\n\t\tout[i] = 0;\n\t}\n\treturn out;\n}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n#endif /*LODEPNG_COMPILE_PNG*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n\nunsigned lodepng_read32bitInt(const unsigned char* buffer)\n{\n\treturn (unsigned)((buffer[0] << 24) | (buffer[1] << 16) | (buffer[2] << 8) | buffer[3]);\n}\n\n#if defined(LODEPNG_COMPILE_PNG) || defined(LODEPNG_COMPILE_ENCODER)\n/*buffer must have at least 4 allocated bytes available*/\nstatic void lodepng_set32bitInt(unsigned char* buffer, unsigned value)\n{\n\tbuffer[0] = (unsigned char)((value >> 24) & 0xff);\n\tbuffer[1] = (unsigned char)((value >> 16) & 0xff);\n\tbuffer[2] = (unsigned 
char)((value >> 8) & 0xff);\n\tbuffer[3] = (unsigned char)((value) & 0xff);\n}\n#endif /*defined(LODEPNG_COMPILE_PNG) || defined(LODEPNG_COMPILE_ENCODER)*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\nstatic void lodepng_add32bitInt(ucvector* buffer, unsigned value)\n{\n\tucvector_resize(buffer, buffer->size + 4); /*todo: give error if resize failed*/\n\tlodepng_set32bitInt(&buffer->data[buffer->size - 4], value);\n}\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / File IO                                                                / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_DISK\n\n/* returns negative value on error. This should be pure C compatible, so no fstat. */\nstatic long lodepng_filesize(const char* filename)\n{\n\tFILE* file;\n\tlong size;\n\tfile = fopen(filename, \"rb\");\n\tif (!file) return -1;\n\n\tif (fseek(file, 0, SEEK_END) != 0)\n\t{\n\t\tfclose(file);\n\t\treturn -1;\n\t}\n\n\tsize = ftell(file);\n\t/* It may give LONG_MAX as directory size, this is invalid for us. */\n\tif (size == LONG_MAX) size = -1;\n\n\tfclose(file);\n\treturn size;\n}\n\n/* load file into buffer that already has the correct allocated size. 
Returns error code.*/\nstatic unsigned lodepng_buffer_file(unsigned char* out, size_t size, const char* filename)\n{\n\tFILE* file;\n\tsize_t readsize;\n\tfile = fopen(filename, \"rb\");\n\tif (!file) return 78;\n\n\treadsize = fread(out, 1, size, file);\n\tfclose(file);\n\n\tif (readsize != size) return 78;\n\treturn 0;\n}\n\nunsigned lodepng_load_file(unsigned char** out, size_t* outsize, const char* filename)\n{\n\tlong size = lodepng_filesize(filename);\n\tif (size < 0) return 78;\n\t*outsize = (size_t)size;\n\n\t*out = (unsigned char*)lodepng_malloc((size_t)size);\n\tif (!(*out) && size > 0) return 83; /*the above malloc failed*/\n\n\treturn lodepng_buffer_file(*out, (size_t)size, filename);\n}\n\n/*write given buffer to the file, overwriting the file, it doesn't append to it.*/\nunsigned lodepng_save_file(const unsigned char* buffer, size_t buffersize, const char* filename)\n{\n\tFILE* file;\n\tfile = fopen(filename, \"wb\");\n\tif (!file) return 79;\n\tfwrite((char*)buffer, 1, buffersize, file);\n\tfclose(file);\n\treturn 0;\n}\n\n#endif /*LODEPNG_COMPILE_DISK*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* // End of common code and tools. Begin of Zlib related code.            
// */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_ZLIB\n#ifdef LODEPNG_COMPILE_ENCODER\n/*TODO: this ignores potential out of memory errors*/\n#define addBitToStream(/*size_t**/ bitpointer, /*ucvector**/ bitstream, /*unsigned char*/ bit)\\\n{\\\n  /*add a new byte at the end*/\\\n  if(((*bitpointer) & 7) == 0) ucvector_push_back(bitstream, (unsigned char)0);\\\n  /*earlier bit of huffman code is in a lesser significant bit of an earlier byte*/\\\n  (bitstream->data[bitstream->size - 1]) |= (bit << ((*bitpointer) & 0x7));\\\n  ++(*bitpointer);\\\n}\n\nstatic void addBitsToStream(size_t* bitpointer, ucvector* bitstream, unsigned value, size_t nbits)\n{\n\tsize_t i;\n\tfor (i = 0; i != nbits; ++i) addBitToStream(bitpointer, bitstream, (unsigned char)((value >> i) & 1));\n}\n\nstatic void addBitsToStreamReversed(size_t* bitpointer, ucvector* bitstream, unsigned value, size_t nbits)\n{\n\tsize_t i;\n\tfor (i = 0; i != nbits; ++i) addBitToStream(bitpointer, bitstream, (unsigned char)((value >> (nbits - 1 - i)) & 1));\n}\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n#ifdef LODEPNG_COMPILE_DECODER\n\n#define READBIT(bitpointer, bitstream) ((bitstream[bitpointer >> 3] >> (bitpointer & 0x7)) & (unsigned char)1)\n\nstatic unsigned char readBitFromStream(size_t* bitpointer, const unsigned char* bitstream)\n{\n\tunsigned char result = (unsigned char)(READBIT(*bitpointer, bitstream));\n\t++(*bitpointer);\n\treturn result;\n}\n\nstatic unsigned readBitsFromStream(size_t* bitpointer, const unsigned char* bitstream, size_t nbits)\n{\n\tunsigned result = 0, i;\n\tfor (i = 0; i != nbits; ++i)\n\t{\n\t\tresult += ((unsigned)READBIT(*bitpointer, bitstream)) << i;\n\t\t++(*bitpointer);\n\t}\n\treturn result;\n}\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Deflate - 
Huffman                                                      / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#define FIRST_LENGTH_CODE_INDEX 257\n#define LAST_LENGTH_CODE_INDEX 285\n/*256 literals, the end code, some length codes, and 2 unused codes*/\n#define NUM_DEFLATE_CODE_SYMBOLS 288\n/*the distance codes have their own symbols, 30 used, 2 unused*/\n#define NUM_DISTANCE_SYMBOLS 32\n/*the code length codes. 0-15: code lengths, 16: copy previous 3-6 times, 17: 3-10 zeros, 18: 11-138 zeros*/\n#define NUM_CODE_LENGTH_CODES 19\n\n/*the base lengths represented by codes 257-285*/\nstatic const unsigned LENGTHBASE[29]\n= { 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31, 35, 43, 51, 59,\n67, 83, 99, 115, 131, 163, 195, 227, 258 };\n\n/*the extra bits used by codes 257-285 (added to base length)*/\nstatic const unsigned LENGTHEXTRA[29]\n= { 0, 0, 0, 0, 0, 0, 0,  0,  1,  1,  1,  1,  2,  2,  2,  2,  3,  3,  3,  3,\n4,  4,  4,   4,   5,   5,   5,   5,   0 };\n\n/*the base backwards distances (the bits of distance codes appear after length codes and use their own huffman tree)*/\nstatic const unsigned DISTANCEBASE[30]\n= { 1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193, 257, 385, 513,\n769, 1025, 1537, 2049, 3073, 4097, 6145, 8193, 12289, 16385, 24577 };\n\n/*the extra bits of backwards distances (added to base)*/\nstatic const unsigned DISTANCEEXTRA[30]\n= { 0, 0, 0, 0, 1, 1, 2,  2,  3,  3,  4,  4,  5,  5,   6,   6,   7,   7,   8,\n8,    9,    9,   10,   10,   11,   11,   12,    12,    13,    13 };\n\n/*the order in which \"code length alphabet code lengths\" are stored, out of this\nthe huffman tree of the dynamic huffman tree lengths is generated*/\nstatic const unsigned CLCL_ORDER[NUM_CODE_LENGTH_CODES]\n= { 16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15 };\n\n/* ////////////////////////////////////////////////////////////////////////// */\n\n/*\nHuffman tree struct, containing multiple 
representations of the tree\n*/\ntypedef struct HuffmanTree\n{\n\tunsigned* tree2d;\n\tunsigned* tree1d;\n\tunsigned* lengths; /*the lengths of the codes of the 1d-tree*/\n\tunsigned maxbitlen; /*maximum number of bits a single code can get*/\n\tunsigned numcodes; /*number of symbols in the alphabet = number of codes*/\n} HuffmanTree;\n\n/*function used for debug purposes to draw the tree in ascii art with C++*/\n/*\nstatic void HuffmanTree_draw(HuffmanTree* tree)\n{\nstd::cout << \"tree. length: \" << tree->numcodes << \" maxbitlen: \" << tree->maxbitlen << std::endl;\nfor(size_t i = 0; i != tree->tree1d.size; ++i)\n{\nif(tree->lengths.data[i])\nstd::cout << i << \" \" << tree->tree1d.data[i] << \" \" << tree->lengths.data[i] << std::endl;\n}\nstd::cout << std::endl;\n}*/\n\nstatic void HuffmanTree_init(HuffmanTree* tree)\n{\n\ttree->tree2d = 0;\n\ttree->tree1d = 0;\n\ttree->lengths = 0;\n}\n\nstatic void HuffmanTree_cleanup(HuffmanTree* tree)\n{\n\tlodepng_free(tree->tree2d);\n\tlodepng_free(tree->tree1d);\n\tlodepng_free(tree->lengths);\n}\n\n/*the tree representation used by the decoder. return value is error*/\nstatic unsigned HuffmanTree_make2DTree(HuffmanTree* tree)\n{\n\tunsigned nodefilled = 0; /*up to which node it is filled*/\n\tunsigned treepos = 0; /*position in the tree (1 of the numcodes columns)*/\n\tunsigned n, i;\n\n\ttree->tree2d = (unsigned*)lodepng_malloc(tree->numcodes * 2 * sizeof(unsigned));\n\tif (!tree->tree2d) return 83; /*alloc fail*/\n\n\t\t\t\t\t\t\t\t  /*\n\t\t\t\t\t\t\t\t  convert tree1d[] to tree2d[][]. In the 2D array, a value of 32767 means\n\t\t\t\t\t\t\t\t  uninited, a value >= numcodes is an address to another bit, a value < numcodes\n\t\t\t\t\t\t\t\t  is a code. 
The 2 rows are the 2 possible bit values (0 or 1), there are as\n\t\t\t\t\t\t\t\t  many columns as codes - 1.\n\t\t\t\t\t\t\t\t  A good huffman tree has N * 2 - 1 nodes, of which N - 1 are internal nodes.\n\t\t\t\t\t\t\t\t  Here, the internal nodes are stored (what their 0 and 1 option point to).\n\t\t\t\t\t\t\t\t  There is only memory for such a good tree currently; if there are more nodes\n\t\t\t\t\t\t\t\t  (due to too long length codes), error 55 will happen\n\t\t\t\t\t\t\t\t  */\n\tfor (n = 0; n < tree->numcodes * 2; ++n)\n\t{\n\t\ttree->tree2d[n] = 32767; /*32767 here means the tree2d isn't filled there yet*/\n\t}\n\n\tfor (n = 0; n < tree->numcodes; ++n) /*the codes*/\n\t{\n\t\tfor (i = 0; i != tree->lengths[n]; ++i) /*the bits for this code*/\n\t\t{\n\t\t\tunsigned char bit = (unsigned char)((tree->tree1d[n] >> (tree->lengths[n] - i - 1)) & 1);\n\t\t\t/*oversubscribed, see comment in lodepng_error_text*/\n\t\t\tif (treepos > 2147483647 || treepos + 2 > tree->numcodes) return 55;\n\t\t\tif (tree->tree2d[2 * treepos + bit] == 32767) /*not yet filled in*/\n\t\t\t{\n\t\t\t\tif (i + 1 == tree->lengths[n]) /*last bit*/\n\t\t\t\t{\n\t\t\t\t\ttree->tree2d[2 * treepos + bit] = n; /*put the current code in it*/\n\t\t\t\t\ttreepos = 0;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t/*put address of the next step in here, first that address has to be found of course\n\t\t\t\t\t(it's just nodefilled + 1)...*/\n\t\t\t\t\t++nodefilled;\n\t\t\t\t\t/*addresses encoded with numcodes added to it*/\n\t\t\t\t\ttree->tree2d[2 * treepos + bit] = nodefilled + tree->numcodes;\n\t\t\t\t\ttreepos = nodefilled;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse treepos = tree->tree2d[2 * treepos + bit] - tree->numcodes;\n\t\t}\n\t}\n\n\tfor (n = 0; n < tree->numcodes * 2; ++n)\n\t{\n\t\tif (tree->tree2d[n] == 32767) tree->tree2d[n] = 0; /*remove possible remaining 32767's*/\n\t}\n\n\treturn 0;\n}\n\n/*\nSecond step for the ...makeFromLengths and ...makeFromFrequencies functions.\nnumcodes, lengths and 
maxbitlen must already be filled in correctly. return\nvalue is error.\n*/\nstatic unsigned HuffmanTree_makeFromLengths2(HuffmanTree* tree)\n{\n\tuivector blcount;\n\tuivector nextcode;\n\tunsigned error = 0;\n\tunsigned bits, n;\n\n\tuivector_init(&blcount);\n\tuivector_init(&nextcode);\n\n\ttree->tree1d = (unsigned*)lodepng_malloc(tree->numcodes * sizeof(unsigned));\n\tif (!tree->tree1d) error = 83; /*alloc fail*/\n\n\tif (!uivector_resizev(&blcount, tree->maxbitlen + 1, 0)\n\t\t|| !uivector_resizev(&nextcode, tree->maxbitlen + 1, 0))\n\t\terror = 83; /*alloc fail*/\n\n\tif (!error)\n\t{\n\t\t/*step 1: count number of instances of each code length*/\n\t\tfor (bits = 0; bits != tree->numcodes; ++bits) ++blcount.data[tree->lengths[bits]];\n\t\t/*step 2: generate the nextcode values*/\n\t\tfor (bits = 1; bits <= tree->maxbitlen; ++bits)\n\t\t{\n\t\t\tnextcode.data[bits] = (nextcode.data[bits - 1] + blcount.data[bits - 1]) << 1;\n\t\t}\n\t\t/*step 3: generate all the codes*/\n\t\tfor (n = 0; n != tree->numcodes; ++n)\n\t\t{\n\t\t\tif (tree->lengths[n] != 0) tree->tree1d[n] = nextcode.data[tree->lengths[n]]++;\n\t\t}\n\t}\n\n\tuivector_cleanup(&blcount);\n\tuivector_cleanup(&nextcode);\n\n\tif (!error) return HuffmanTree_make2DTree(tree);\n\telse return error;\n}\n\n/*\ngiven the code lengths (as stored in the PNG file), generate the tree as defined\nby Deflate. 
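For reference, the three-step canonical code construction used by HuffmanTree_makeFromLengths2 above can be sketched standalone as follows. This is a hypothetical helper, illustrative only and not part of lodepng; unlike the library code it explicitly zeroes blcount[0], as the sample algorithm in RFC 1951 section 3.2.2 does, and it assumes maxbitlen <= 15:

```c
#include <assert.h>

// Hypothetical standalone sketch of canonical Huffman code assignment
// (RFC 1951, section 3.2.2). Not part of lodepng; assumes maxbitlen <= 15.
static void canonical_codes_sketch(const unsigned* lengths, unsigned numcodes,
	unsigned maxbitlen, unsigned* codes)
{
	unsigned blcount[16] = { 0 };  // how many codes have each bit length
	unsigned nextcode[16] = { 0 }; // smallest code value for each bit length
	unsigned bits, n;
	assert(maxbitlen <= 15);
	// step 1: count the number of codes of each length
	for (n = 0; n < numcodes; ++n) ++blcount[lengths[n]];
	blcount[0] = 0; // length 0 means the symbol is unused and gets no code
	// step 2: compute the numerically smallest code for each length
	for (bits = 1; bits <= maxbitlen; ++bits)
	{
		nextcode[bits] = (nextcode[bits - 1] + blcount[bits - 1]) << 1;
	}
	// step 3: hand out consecutive values within each length, in symbol order
	for (n = 0; n < numcodes; ++n)
	{
		if (lengths[n] != 0) codes[n] = nextcode[lengths[n]]++;
	}
}
```

For example, lengths { 2, 1, 3, 3 } yield the codes { 2, 0, 6, 7 }, i.e. the bit strings "10", "0", "110" and "111".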
maxbitlen is the maximum bits that a code in the tree can have.\nreturn value is error.\n*/\nstatic unsigned HuffmanTree_makeFromLengths(HuffmanTree* tree, const unsigned* bitlen,\n\tsize_t numcodes, unsigned maxbitlen)\n{\n\tunsigned i;\n\ttree->lengths = (unsigned*)lodepng_malloc(numcodes * sizeof(unsigned));\n\tif (!tree->lengths) return 83; /*alloc fail*/\n\tfor (i = 0; i != numcodes; ++i) tree->lengths[i] = bitlen[i];\n\ttree->numcodes = (unsigned)numcodes; /*number of symbols*/\n\ttree->maxbitlen = maxbitlen;\n\treturn HuffmanTree_makeFromLengths2(tree);\n}\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\n/*BPM: Boundary Package Merge, see \"A Fast and Space-Economical Algorithm for Length-Limited Coding\",\nJyrki Katajainen, Alistair Moffat, Andrew Turpin, 1995.*/\n\n/*chain node for boundary package merge*/\ntypedef struct BPMNode\n{\n\tint weight; /*the sum of all weights in this chain*/\n\tunsigned index; /*index of this leaf node (called \"count\" in the paper)*/\n\tstruct BPMNode* tail; /*the next nodes in this chain (null if last)*/\n\tint in_use;\n} BPMNode;\n\n/*lists of chains*/\ntypedef struct BPMLists\n{\n\t/*memory pool*/\n\tunsigned memsize;\n\tBPMNode* memory;\n\tunsigned numfree;\n\tunsigned nextfree;\n\tBPMNode** freelist;\n\t/*two heads of lookahead chains per list*/\n\tunsigned listsize;\n\tBPMNode** chains0;\n\tBPMNode** chains1;\n} BPMLists;\n\n/*creates a new chain node with the given parameters, from the memory in the lists */\nstatic BPMNode* bpmnode_create(BPMLists* lists, int weight, unsigned index, BPMNode* tail)\n{\n\tunsigned i;\n\tBPMNode* result;\n\n\t/*memory full, so garbage collect*/\n\tif (lists->nextfree >= lists->numfree)\n\t{\n\t\t/*mark only those that are in use*/\n\t\tfor (i = 0; i != lists->memsize; ++i) lists->memory[i].in_use = 0;\n\t\tfor (i = 0; i != lists->listsize; ++i)\n\t\t{\n\t\t\tBPMNode* node;\n\t\t\tfor (node = lists->chains0[i]; node != 0; node = node->tail) node->in_use = 1;\n\t\t\tfor (node = lists->chains1[i]; 
node != 0; node = node->tail) node->in_use = 1;\n\t\t}\n\t\t/*collect those that are free*/\n\t\tlists->numfree = 0;\n\t\tfor (i = 0; i != lists->memsize; ++i)\n\t\t{\n\t\t\tif (!lists->memory[i].in_use) lists->freelist[lists->numfree++] = &lists->memory[i];\n\t\t}\n\t\tlists->nextfree = 0;\n\t}\n\n\tresult = lists->freelist[lists->nextfree++];\n\tresult->weight = weight;\n\tresult->index = index;\n\tresult->tail = tail;\n\treturn result;\n}\n\n/*sort the leaves with stable mergesort*/\nstatic void bpmnode_sort(BPMNode* leaves, size_t num)\n{\n\tBPMNode* mem = (BPMNode*)lodepng_malloc(sizeof(*leaves) * num);\n\tsize_t width, counter = 0;\n\tfor (width = 1; width < num; width *= 2)\n\t{\n\t\tBPMNode* a = (counter & 1) ? mem : leaves;\n\t\tBPMNode* b = (counter & 1) ? leaves : mem;\n\t\tsize_t p;\n\t\tfor (p = 0; p < num; p += 2 * width)\n\t\t{\n\t\t\tsize_t q = (p + width > num) ? num : (p + width);\n\t\t\tsize_t r = (p + 2 * width > num) ? num : (p + 2 * width);\n\t\t\tsize_t i = p, j = q, k;\n\t\t\tfor (k = p; k < r; k++)\n\t\t\t{\n\t\t\t\tif (i < q && (j >= r || a[i].weight <= a[j].weight)) b[k] = a[i++];\n\t\t\t\telse b[k] = a[j++];\n\t\t\t}\n\t\t}\n\t\tcounter++;\n\t}\n\tif (counter & 1) memcpy(leaves, mem, sizeof(*leaves) * num);\n\tlodepng_free(mem);\n}\n\n/*Boundary Package Merge step, numpresent is the amount of leaves, and c is the current chain.*/\nstatic void boundaryPM(BPMLists* lists, BPMNode* leaves, size_t numpresent, int c, int num)\n{\n\tunsigned lastindex = lists->chains1[c]->index;\n\n\tif (c == 0)\n\t{\n\t\tif (lastindex >= numpresent) return;\n\t\tlists->chains0[c] = lists->chains1[c];\n\t\tlists->chains1[c] = bpmnode_create(lists, leaves[lastindex].weight, lastindex + 1, 0);\n\t}\n\telse\n\t{\n\t\t/*sum of the weights of the head nodes of the previous lookahead chains.*/\n\t\tint sum = lists->chains0[c - 1]->weight + lists->chains1[c - 1]->weight;\n\t\tlists->chains0[c] = lists->chains1[c];\n\t\tif (lastindex < numpresent && sum > 
leaves[lastindex].weight)\n\t\t{\n\t\t\tlists->chains1[c] = bpmnode_create(lists, leaves[lastindex].weight, lastindex + 1, lists->chains1[c]->tail);\n\t\t\treturn;\n\t\t}\n\t\tlists->chains1[c] = bpmnode_create(lists, sum, lastindex, lists->chains1[c - 1]);\n\t\t/*in the end we are only interested in the chain of the last list, so no\n\t\tneed to recurse if we're at the last one (this gives measurable speedup)*/\n\t\tif (num + 1 < (int)(2 * numpresent - 2))\n\t\t{\n\t\t\tboundaryPM(lists, leaves, numpresent, c - 1, num);\n\t\t\tboundaryPM(lists, leaves, numpresent, c - 1, num);\n\t\t}\n\t}\n}\n\nunsigned lodepng_huffman_code_lengths(unsigned* lengths, const unsigned* frequencies,\n\tsize_t numcodes, unsigned maxbitlen)\n{\n\tunsigned error = 0;\n\tunsigned i;\n\tsize_t numpresent = 0; /*number of symbols with non-zero frequency*/\n\tBPMNode* leaves; /*the symbols, only those with > 0 frequency*/\n\n\tif (numcodes == 0) return 80; /*error: a tree of 0 symbols is not supposed to be made*/\n\tif ((1u << maxbitlen) < (unsigned)numcodes) return 80; /*error: represent all symbols*/\n\n\tleaves = (BPMNode*)lodepng_malloc(numcodes * sizeof(*leaves));\n\tif (!leaves) return 83; /*alloc fail*/\n\n\tfor (i = 0; i != numcodes; ++i)\n\t{\n\t\tif (frequencies[i] > 0)\n\t\t{\n\t\t\tleaves[numpresent].weight = (int)frequencies[i];\n\t\t\tleaves[numpresent].index = i;\n\t\t\t++numpresent;\n\t\t}\n\t}\n\n\tfor (i = 0; i != numcodes; ++i) lengths[i] = 0;\n\n\t/*ensure at least two present symbols. There should be at least one symbol\n\taccording to RFC 1951 section 3.2.7. Some decoders incorrectly require two. To\n\tmake these work as well ensure there are at least two symbols. 
The\n\tPackage-Merge code below also doesn't work correctly if there's only one\n\tsymbol, it'd give it the theoritical 0 bits but in practice zlib wants 1 bit*/\n\tif (numpresent == 0)\n\t{\n\t\tlengths[0] = lengths[1] = 1; /*note that for RFC 1951 section 3.2.7, only lengths[0] = 1 is needed*/\n\t}\n\telse if (numpresent == 1)\n\t{\n\t\tlengths[leaves[0].index] = 1;\n\t\tlengths[leaves[0].index == 0 ? 1 : 0] = 1;\n\t}\n\telse\n\t{\n\t\tBPMLists lists;\n\t\tBPMNode* node;\n\n\t\tbpmnode_sort(leaves, numpresent);\n\n\t\tlists.listsize = maxbitlen;\n\t\tlists.memsize = 2 * maxbitlen * (maxbitlen + 1);\n\t\tlists.nextfree = 0;\n\t\tlists.numfree = lists.memsize;\n\t\tlists.memory = (BPMNode*)lodepng_malloc(lists.memsize * sizeof(*lists.memory));\n\t\tlists.freelist = (BPMNode**)lodepng_malloc(lists.memsize * sizeof(BPMNode*));\n\t\tlists.chains0 = (BPMNode**)lodepng_malloc(lists.listsize * sizeof(BPMNode*));\n\t\tlists.chains1 = (BPMNode**)lodepng_malloc(lists.listsize * sizeof(BPMNode*));\n\t\tif (!lists.memory || !lists.freelist || !lists.chains0 || !lists.chains1) error = 83; /*alloc fail*/\n\n\t\tif (!error)\n\t\t{\n\t\t\tfor (i = 0; i != lists.memsize; ++i) lists.freelist[i] = &lists.memory[i];\n\n\t\t\tbpmnode_create(&lists, leaves[0].weight, 1, 0);\n\t\t\tbpmnode_create(&lists, leaves[1].weight, 2, 0);\n\n\t\t\tfor (i = 0; i != lists.listsize; ++i)\n\t\t\t{\n\t\t\t\tlists.chains0[i] = &lists.memory[0];\n\t\t\t\tlists.chains1[i] = &lists.memory[1];\n\t\t\t}\n\n\t\t\t/*each boundaryPM call adds one chain to the last list, and we need 2 * numpresent - 2 chains.*/\n\t\t\tfor (i = 2; i != 2 * numpresent - 2; ++i) boundaryPM(&lists, leaves, numpresent, (int)maxbitlen - 1, (int)i);\n\n\t\t\tfor (node = lists.chains1[maxbitlen - 1]; node; node = node->tail)\n\t\t\t{\n\t\t\t\tfor (i = 0; i != node->index; ++i) 
++lengths[leaves[i].index];\n\t\t\t}\n\t\t}\n\n\t\tlodepng_free(lists.memory);\n\t\tlodepng_free(lists.freelist);\n\t\tlodepng_free(lists.chains0);\n\t\tlodepng_free(lists.chains1);\n\t}\n\n\tlodepng_free(leaves);\n\treturn error;\n}\n\n/*Create the Huffman tree given the symbol frequencies*/\nstatic unsigned HuffmanTree_makeFromFrequencies(HuffmanTree* tree, const unsigned* frequencies,\n\tsize_t mincodes, size_t numcodes, unsigned maxbitlen)\n{\n\tunsigned error = 0;\n\twhile (!frequencies[numcodes - 1] && numcodes > mincodes) --numcodes; /*trim zeroes*/\n\ttree->maxbitlen = maxbitlen;\n\ttree->numcodes = (unsigned)numcodes; /*number of symbols*/\n\ttree->lengths = (unsigned*)lodepng_realloc(tree->lengths, numcodes * sizeof(unsigned));\n\tif (!tree->lengths) return 83; /*alloc fail*/\n\t\t\t\t\t\t\t\t   /*initialize all lengths to 0*/\n\tmemset(tree->lengths, 0, numcodes * sizeof(unsigned));\n\n\terror = lodepng_huffman_code_lengths(tree->lengths, frequencies, numcodes, maxbitlen);\n\tif (!error) error = HuffmanTree_makeFromLengths2(tree);\n\treturn error;\n}\n\nstatic unsigned HuffmanTree_getCode(const HuffmanTree* tree, unsigned index)\n{\n\treturn tree->tree1d[index];\n}\n\nstatic unsigned HuffmanTree_getLength(const HuffmanTree* tree, unsigned index)\n{\n\treturn tree->lengths[index];\n}\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n/*get the literal and length code tree of a deflated block with fixed tree, as per the deflate specification*/\nstatic unsigned generateFixedLitLenTree(HuffmanTree* tree)\n{\n\tunsigned i, error = 0;\n\tunsigned* bitlen = (unsigned*)lodepng_malloc(NUM_DEFLATE_CODE_SYMBOLS * sizeof(unsigned));\n\tif (!bitlen) return 83; /*alloc fail*/\n\n\t\t\t\t\t\t\t/*288 possible codes: 0-255=literals, 256=endcode, 257-285=lengthcodes, 286-287=unused*/\n\tfor (i = 0; i <= 143; ++i) bitlen[i] = 8;\n\tfor (i = 144; i <= 255; ++i) bitlen[i] = 9;\n\tfor (i = 256; i <= 279; ++i) bitlen[i] = 7;\n\tfor (i = 280; i <= 287; ++i) bitlen[i] = 8;\n\n\terror = 
HuffmanTree_makeFromLengths(tree, bitlen, NUM_DEFLATE_CODE_SYMBOLS, 15);\n\n\tlodepng_free(bitlen);\n\treturn error;\n}\n\n/*get the distance code tree of a deflated block with fixed tree, as specified in the deflate specification*/\nstatic unsigned generateFixedDistanceTree(HuffmanTree* tree)\n{\n\tunsigned i, error = 0;\n\tunsigned* bitlen = (unsigned*)lodepng_malloc(NUM_DISTANCE_SYMBOLS * sizeof(unsigned));\n\tif (!bitlen) return 83; /*alloc fail*/\n\n\t\t\t\t\t\t\t/*there are 32 distance codes, but 30-31 are unused*/\n\tfor (i = 0; i != NUM_DISTANCE_SYMBOLS; ++i) bitlen[i] = 5;\n\terror = HuffmanTree_makeFromLengths(tree, bitlen, NUM_DISTANCE_SYMBOLS, 15);\n\n\tlodepng_free(bitlen);\n\treturn error;\n}\n\n#ifdef LODEPNG_COMPILE_DECODER\n\n/*\nreturns the code, or (unsigned)(-1) if error happened\ninbitlength is the length of the complete buffer, in bits (so its byte length times 8)\n*/\nstatic unsigned huffmanDecodeSymbol(const unsigned char* in, size_t* bp,\n\tconst HuffmanTree* codetree, size_t inbitlength)\n{\n\tunsigned treepos = 0, ct;\n\tfor (;;)\n\t{\n\t\tif (*bp >= inbitlength) return (unsigned)(-1); /*error: end of input memory reached without endcode*/\n\t\t\t\t\t\t\t\t\t\t\t\t\t   /*\n\t\t\t\t\t\t\t\t\t\t\t\t\t   decode the symbol from the tree. 
The \"readBitFromStream\" code is inlined in\n\t\t\t\t\t\t\t\t\t\t\t\t\t   the expression below because this is the biggest bottleneck while decoding\n\t\t\t\t\t\t\t\t\t\t\t\t\t   */\n\t\tct = codetree->tree2d[(treepos << 1) + READBIT(*bp, in)];\n\t\t++(*bp);\n\t\tif (ct < codetree->numcodes) return ct; /*the symbol is decoded, return it*/\n\t\telse treepos = ct - codetree->numcodes; /*symbol not yet decoded, instead move tree position*/\n\n\t\tif (treepos >= codetree->numcodes) return (unsigned)(-1); /*error: it appeared outside the codetree*/\n\t}\n}\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_DECODER\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Inflator (Decompressor)                                                / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n/*get the tree of a deflated block with fixed tree, as specified in the deflate specification*/\nstatic void getTreeInflateFixed(HuffmanTree* tree_ll, HuffmanTree* tree_d)\n{\n\t/*TODO: check for out of memory errors*/\n\tgenerateFixedLitLenTree(tree_ll);\n\tgenerateFixedDistanceTree(tree_d);\n}\n\n/*get the tree of a deflated block with dynamic tree, the tree itself is also Huffman compressed with a known tree*/\nstatic unsigned getTreeInflateDynamic(HuffmanTree* tree_ll, HuffmanTree* tree_d,\n\tconst unsigned char* in, size_t* bp, size_t inlength)\n{\n\t/*make sure that length values that aren't filled in will be 0, or a wrong tree will be generated*/\n\tunsigned error = 0;\n\tunsigned n, HLIT, HDIST, HCLEN, i;\n\tsize_t inbitlength = inlength * 8;\n\n\t/*see comments in deflateDynamic for explanation of the context and these variables, it is analogous*/\n\tunsigned* bitlen_ll = 0; /*lit,len code lengths*/\n\tunsigned* bitlen_d = 0; /*dist code lengths*/\n\t\t\t\t\t\t\t/*code length code lengths (\"clcl\"), the bit lengths of the huffman tree used to compress bitlen_ll and bitlen_d*/\n\tunsigned* 
bitlen_cl = 0;\n\tHuffmanTree tree_cl; /*the code tree for code length codes (the huffman tree for compressed huffman trees)*/\n\n\tif ((*bp) + 14 > (inlength << 3)) return 49; /*error: the bit pointer is or will go past the memory*/\n\n\t\t\t\t\t\t\t\t\t\t\t\t /*number of literal/length codes + 257. Unlike the spec, the value 257 is added to it here already*/\n\tHLIT = readBitsFromStream(bp, in, 5) + 257;\n\t/*number of distance codes. Unlike the spec, the value 1 is added to it here already*/\n\tHDIST = readBitsFromStream(bp, in, 5) + 1;\n\t/*number of code length codes. Unlike the spec, the value 4 is added to it here already*/\n\tHCLEN = readBitsFromStream(bp, in, 4) + 4;\n\n\tif ((*bp) + HCLEN * 3 > (inlength << 3)) return 50; /*error: the bit pointer is or will go past the memory*/\n\n\tHuffmanTree_init(&tree_cl);\n\n\twhile (!error)\n\t{\n\t\t/*read the code length codes out of 3 * (amount of code length codes) bits*/\n\n\t\tbitlen_cl = (unsigned*)lodepng_malloc(NUM_CODE_LENGTH_CODES * sizeof(unsigned));\n\t\tif (!bitlen_cl) ERROR_BREAK(83 /*alloc fail*/);\n\n\t\tfor (i = 0; i != NUM_CODE_LENGTH_CODES; ++i)\n\t\t{\n\t\t\tif (i < HCLEN) bitlen_cl[CLCL_ORDER[i]] = readBitsFromStream(bp, in, 3);\n\t\t\telse bitlen_cl[CLCL_ORDER[i]] = 0; /*if not, it must stay 0*/\n\t\t}\n\n\t\terror = HuffmanTree_makeFromLengths(&tree_cl, bitlen_cl, NUM_CODE_LENGTH_CODES, 7);\n\t\tif (error) break;\n\n\t\t/*now we can use this tree to read the lengths for the tree that this function will return*/\n\t\tbitlen_ll = (unsigned*)lodepng_malloc(NUM_DEFLATE_CODE_SYMBOLS * sizeof(unsigned));\n\t\tbitlen_d = (unsigned*)lodepng_malloc(NUM_DISTANCE_SYMBOLS * sizeof(unsigned));\n\t\tif (!bitlen_ll || !bitlen_d) ERROR_BREAK(83 /*alloc fail*/);\n\t\tfor (i = 0; i != NUM_DEFLATE_CODE_SYMBOLS; ++i) bitlen_ll[i] = 0;\n\t\tfor (i = 0; i != NUM_DISTANCE_SYMBOLS; ++i) bitlen_d[i] = 0;\n\n\t\t/*i is the current symbol we're reading in the part that contains the code lengths of lit/len and dist 
codes*/\n\t\ti = 0;\n\t\twhile (i < HLIT + HDIST)\n\t\t{\n\t\t\tunsigned code = huffmanDecodeSymbol(in, bp, &tree_cl, inbitlength);\n\t\t\tif (code <= 15) /*a length code*/\n\t\t\t{\n\t\t\t\tif (i < HLIT) bitlen_ll[i] = code;\n\t\t\t\telse bitlen_d[i - HLIT] = code;\n\t\t\t\t++i;\n\t\t\t}\n\t\t\telse if (code == 16) /*repeat previous*/\n\t\t\t{\n\t\t\t\tunsigned replength = 3; /*read in the 2 bits that indicate repeat length (3-6)*/\n\t\t\t\tunsigned value; /*set value to the previous code*/\n\n\t\t\t\tif (i == 0) ERROR_BREAK(54); /*can't repeat previous if i is 0*/\n\n\t\t\t\tif ((*bp + 2) > inbitlength) ERROR_BREAK(50); /*error, bit pointer jumps past memory*/\n\t\t\t\treplength += readBitsFromStream(bp, in, 2);\n\n\t\t\t\tif (i < HLIT + 1) value = bitlen_ll[i - 1];\n\t\t\t\telse value = bitlen_d[i - HLIT - 1];\n\t\t\t\t/*repeat this value in the next lengths*/\n\t\t\t\tfor (n = 0; n < replength; ++n)\n\t\t\t\t{\n\t\t\t\t\tif (i >= HLIT + HDIST) ERROR_BREAK(13); /*error: i is larger than the amount of codes*/\n\t\t\t\t\tif (i < HLIT) bitlen_ll[i] = value;\n\t\t\t\t\telse bitlen_d[i - HLIT] = value;\n\t\t\t\t\t++i;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (code == 17) /*repeat \"0\" 3-10 times*/\n\t\t\t{\n\t\t\t\tunsigned replength = 3; /*read in the bits that indicate repeat length*/\n\t\t\t\tif ((*bp + 3) > inbitlength) ERROR_BREAK(50); /*error, bit pointer jumps past memory*/\n\t\t\t\treplength += readBitsFromStream(bp, in, 3);\n\n\t\t\t\t/*repeat this value in the next lengths*/\n\t\t\t\tfor (n = 0; n < replength; ++n)\n\t\t\t\t{\n\t\t\t\t\tif (i >= HLIT + HDIST) ERROR_BREAK(14); /*error: i is larger than the amount of codes*/\n\n\t\t\t\t\tif (i < HLIT) bitlen_ll[i] = 0;\n\t\t\t\t\telse bitlen_d[i - HLIT] = 0;\n\t\t\t\t\t++i;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (code == 18) /*repeat \"0\" 11-138 times*/\n\t\t\t{\n\t\t\t\tunsigned replength = 11; /*read in the bits that indicate repeat length*/\n\t\t\t\tif ((*bp + 7) > inbitlength) ERROR_BREAK(50); /*error, bit 
pointer jumps past memory*/\n\t\t\t\treplength += readBitsFromStream(bp, in, 7);\n\n\t\t\t\t/*repeat this value in the next lengths*/\n\t\t\t\tfor (n = 0; n < replength; ++n)\n\t\t\t\t{\n\t\t\t\t\tif (i >= HLIT + HDIST) ERROR_BREAK(15); /*error: i is larger than the amount of codes*/\n\n\t\t\t\t\tif (i < HLIT) bitlen_ll[i] = 0;\n\t\t\t\t\telse bitlen_d[i - HLIT] = 0;\n\t\t\t\t\t++i;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse /*if(code == (unsigned)(-1))*/ /*huffmanDecodeSymbol returns (unsigned)(-1) in case of error*/\n\t\t\t{\n\t\t\t\tif (code == (unsigned)(-1))\n\t\t\t\t{\n\t\t\t\t\t/*return error code 10 or 11 depending on the situation that happened in huffmanDecodeSymbol\n\t\t\t\t\t(10=no endcode, 11=wrong jump outside of tree)*/\n\t\t\t\t\terror = (*bp) > inbitlength ? 10 : 11;\n\t\t\t\t}\n\t\t\t\telse error = 16; /*nonexistent code, this can never happen*/\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (error) break;\n\n\t\tif (bitlen_ll[256] == 0) ERROR_BREAK(64); /*the length of the end code 256 must be larger than 0*/\n\n\t\t/*now we've finally got HLIT and HDIST, so generate the code trees, and the function is done*/\n\t\terror = HuffmanTree_makeFromLengths(tree_ll, bitlen_ll, NUM_DEFLATE_CODE_SYMBOLS, 15);\n\t\tif (error) break;\n\t\terror = HuffmanTree_makeFromLengths(tree_d, bitlen_d, NUM_DISTANCE_SYMBOLS, 15);\n\n\t\tbreak; /*end of error-while*/\n\t}\n\n\tlodepng_free(bitlen_cl);\n\tlodepng_free(bitlen_ll);\n\tlodepng_free(bitlen_d);\n\tHuffmanTree_cleanup(&tree_cl);\n\n\treturn error;\n}\n\n/*inflate a block with dynamic or fixed Huffman tree*/\nstatic unsigned inflateHuffmanBlock(ucvector* out, const unsigned char* in, size_t* bp,\n\tsize_t* pos, size_t inlength, unsigned btype)\n{\n\tunsigned error = 0;\n\tHuffmanTree tree_ll; /*the huffman tree for literal and length codes*/\n\tHuffmanTree tree_d; /*the huffman tree for distance codes*/\n\tsize_t inbitlength = inlength * 
8;\n\n\tHuffmanTree_init(&tree_ll);\n\tHuffmanTree_init(&tree_d);\n\n\tif (btype == 1) getTreeInflateFixed(&tree_ll, &tree_d);\n\telse if (btype == 2) error = getTreeInflateDynamic(&tree_ll, &tree_d, in, bp, inlength);\n\n\twhile (!error) /*decode all symbols until end reached, breaks at end code*/\n\t{\n\t\t/*code_ll is literal, length or end code*/\n\t\tunsigned code_ll = huffmanDecodeSymbol(in, bp, &tree_ll, inbitlength);\n\t\tif (code_ll <= 255) /*literal symbol*/\n\t\t{\n\t\t\t/*ucvector_push_back would do the same, but for some reason the two lines below run 10% faster*/\n\t\t\tif (!ucvector_resize(out, (*pos) + 1)) ERROR_BREAK(83 /*alloc fail*/);\n\t\t\tout->data[*pos] = (unsigned char)code_ll;\n\t\t\t++(*pos);\n\t\t}\n\t\telse if (code_ll >= FIRST_LENGTH_CODE_INDEX && code_ll <= LAST_LENGTH_CODE_INDEX) /*length code*/\n\t\t{\n\t\t\tunsigned code_d, distance;\n\t\t\tunsigned numextrabits_l, numextrabits_d; /*extra bits for length and distance*/\n\t\t\tsize_t start, forward, backward, length;\n\n\t\t\t/*part 1: get length base*/\n\t\t\tlength = LENGTHBASE[code_ll - FIRST_LENGTH_CODE_INDEX];\n\n\t\t\t/*part 2: get extra bits and add the value of that to length*/\n\t\t\tnumextrabits_l = LENGTHEXTRA[code_ll - FIRST_LENGTH_CODE_INDEX];\n\t\t\tif ((*bp + numextrabits_l) > inbitlength) ERROR_BREAK(51); /*error, bit pointer will jump past memory*/\n\t\t\tlength += readBitsFromStream(bp, in, numextrabits_l);\n\n\t\t\t/*part 3: get distance code*/\n\t\t\tcode_d = huffmanDecodeSymbol(in, bp, &tree_d, inbitlength);\n\t\t\tif (code_d > 29)\n\t\t\t{\n\t\t\t\tif (code_d == (unsigned)(-1)) /*huffmanDecodeSymbol returns (unsigned)(-1) in case of error*/\n\t\t\t\t{\n\t\t\t\t\t/*return error code 10 or 11 depending on the situation that happened in huffmanDecodeSymbol\n\t\t\t\t\t(10=no endcode, 11=wrong jump outside of tree)*/\n\t\t\t\t\terror = (*bp) > inlength * 8 ? 
10 : 11;\n\t\t\t\t}\n\t\t\t\telse error = 18; /*error: invalid distance code (30-31 are never used)*/\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tdistance = DISTANCEBASE[code_d];\n\n\t\t\t/*part 4: get extra bits from distance*/\n\t\t\tnumextrabits_d = DISTANCEEXTRA[code_d];\n\t\t\tif ((*bp + numextrabits_d) > inbitlength) ERROR_BREAK(51); /*error, bit pointer will jump past memory*/\n\t\t\tdistance += readBitsFromStream(bp, in, numextrabits_d);\n\n\t\t\t/*part 5: fill in all the out[n] values based on the length and dist*/\n\t\t\tstart = (*pos);\n\t\t\tif (distance > start) ERROR_BREAK(52); /*too long backward distance*/\n\t\t\tbackward = start - distance;\n\n\t\t\tif (!ucvector_resize(out, (*pos) + length)) ERROR_BREAK(83 /*alloc fail*/);\n\t\t\tif (distance < length) {\n\t\t\t\tfor (forward = 0; forward < length; ++forward)\n\t\t\t\t{\n\t\t\t\t\tout->data[(*pos)++] = out->data[backward++];\n\t\t\t\t}\n\t\t\t}\n\t\t\telse {\n\t\t\t\tmemcpy(out->data + *pos, out->data + backward, length);\n\t\t\t\t*pos += length;\n\t\t\t}\n\t\t}\n\t\telse if (code_ll == 256)\n\t\t{\n\t\t\tbreak; /*end code, break the loop*/\n\t\t}\n\t\telse /*if(code == (unsigned)(-1))*/ /*huffmanDecodeSymbol returns (unsigned)(-1) in case of error*/\n\t\t{\n\t\t\t/*return error code 10 or 11 depending on the situation that happened in huffmanDecodeSymbol\n\t\t\t(10=no endcode, 11=wrong jump outside of tree)*/\n\t\t\terror = ((*bp) > inlength * 8) ? 
10 : 11;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tHuffmanTree_cleanup(&tree_ll);\n\tHuffmanTree_cleanup(&tree_d);\n\n\treturn error;\n}\n\nstatic unsigned inflateNoCompression(ucvector* out, const unsigned char* in, size_t* bp, size_t* pos, size_t inlength)\n{\n\tsize_t p;\n\tunsigned LEN, NLEN, n, error = 0;\n\n\t/*go to first boundary of byte*/\n\twhile (((*bp) & 0x7) != 0) ++(*bp);\n\tp = (*bp) / 8; /*byte position*/\n\n\t\t\t\t   /*read LEN (2 bytes) and NLEN (2 bytes)*/\n\tif (p + 4 >= inlength) return 52; /*error, bit pointer will jump past memory*/\n\tLEN = in[p] + 256u * in[p + 1]; p += 2;\n\tNLEN = in[p] + 256u * in[p + 1]; p += 2;\n\n\t/*check if 16-bit NLEN is really the one's complement of LEN*/\n\tif (LEN + NLEN != 65535) return 21; /*error: NLEN is not one's complement of LEN*/\n\n\tif (!ucvector_resize(out, (*pos) + LEN)) return 83; /*alloc fail*/\n\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t/*read the literal data: LEN bytes are now stored in the out buffer*/\n\tif (p + LEN > inlength) return 23; /*error: reading outside of in buffer*/\n\tfor (n = 0; n < LEN; ++n) out->data[(*pos)++] = in[p++];\n\n\t(*bp) = p * 8;\n\n\treturn error;\n}\n\nstatic unsigned lodepng_inflatev(ucvector* out,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGDecompressSettings* settings)\n{\n\t/*bit pointer in the \"in\" data, current byte is bp >> 3, current bit is bp & 0x7 (from lsb to msb of the byte)*/\n\tsize_t bp = 0;\n\tunsigned BFINAL = 0;\n\tsize_t pos = 0; /*byte position in the out buffer*/\n\tunsigned error = 0;\n\n\t(void)settings;\n\n\twhile (!BFINAL)\n\t{\n\t\tunsigned BTYPE;\n\t\tif (bp + 2 >= insize * 8) return 52; /*error, bit pointer will jump past memory*/\n\t\tBFINAL = readBitFromStream(&bp, in);\n\t\tBTYPE = 1u * readBitFromStream(&bp, in);\n\t\tBTYPE += 2u * readBitFromStream(&bp, in);\n\n\t\tif (BTYPE == 3) return 20; /*error: invalid BTYPE*/\n\t\telse if (BTYPE == 0) error = inflateNoCompression(out, in, &bp, &pos, insize); /*no compression*/\n\t\telse error = 
inflateHuffmanBlock(out, in, &bp, &pos, insize, BTYPE); /*compression, BTYPE 01 or 10*/\n\n\t\tif (error) return error;\n\t}\n\n\treturn error;\n}\n\nunsigned lodepng_inflate(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGDecompressSettings* settings)\n{\n\tunsigned error;\n\tucvector v;\n\tucvector_init_buffer(&v, *out, *outsize);\n\terror = lodepng_inflatev(&v, in, insize, settings);\n\t*out = v.data;\n\t*outsize = v.size;\n\treturn error;\n}\n\nstatic unsigned inflate(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGDecompressSettings* settings)\n{\n\tif (settings->custom_inflate)\n\t{\n\t\treturn settings->custom_inflate(out, outsize, in, insize, settings);\n\t}\n\telse\n\t{\n\t\treturn lodepng_inflate(out, outsize, in, insize, settings);\n\t}\n}\n\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Deflator (Compressor)                                                  / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\nstatic const size_t MAX_SUPPORTED_DEFLATE_LENGTH = 258;\n\n/*bitlen is the size in bits of the code*/\nstatic void addHuffmanSymbol(size_t* bp, ucvector* compressed, unsigned code, unsigned bitlen)\n{\n\taddBitsToStreamReversed(bp, compressed, code, bitlen);\n}\n\n/*search the index in the array, that has the largest value smaller than or equal to the given value,\ngiven array must be sorted (if no value is smaller, it returns the size of the given array)*/\nstatic size_t searchCodeIndex(const unsigned* array, size_t array_size, size_t value)\n{\n\t/*binary search (only small gain over linear). 
TODO: use CPU log2 instruction for getting symbols instead*/\n\tsize_t left = 1;\n\tsize_t right = array_size - 1;\n\n\twhile (left <= right) {\n\t\tsize_t mid = (left + right) >> 1;\n\t\tif (array[mid] >= value) right = mid - 1;\n\t\telse left = mid + 1;\n\t}\n\tif (left >= array_size || array[left] > value) left--;\n\treturn left;\n}\n\nstatic void addLengthDistance(uivector* values, size_t length, size_t distance)\n{\n\t/*values in encoded vector are those used by deflate:\n\t0-255: literal bytes\n\t256: end\n\t257-285: length/distance pair (length code, followed by extra length bits, distance code, extra distance bits)\n\t286-287: invalid*/\n\n\tunsigned length_code = (unsigned)searchCodeIndex(LENGTHBASE, 29, length);\n\tunsigned extra_length = (unsigned)(length - LENGTHBASE[length_code]);\n\tunsigned dist_code = (unsigned)searchCodeIndex(DISTANCEBASE, 30, distance);\n\tunsigned extra_distance = (unsigned)(distance - DISTANCEBASE[dist_code]);\n\n\tuivector_push_back(values, length_code + FIRST_LENGTH_CODE_INDEX);\n\tuivector_push_back(values, extra_length);\n\tuivector_push_back(values, dist_code);\n\tuivector_push_back(values, extra_distance);\n}\n\n/*3 bytes of data get encoded into two bytes. The hash cannot use more than 3\nbytes as input because 3 is the minimum match length for deflate*/\nstatic const unsigned HASH_NUM_VALUES = 65536;\nstatic const unsigned HASH_BIT_MASK = 65535; /*HASH_NUM_VALUES - 1, but C90 does not like that as initializer*/\n\ntypedef struct Hash\n{\n\tint* head; /*hash value to head circular pos - can be outdated if went around window*/\n\t\t\t   /*circular pos to prev circular pos*/\n\tunsigned short* chain;\n\tint* val; /*circular pos to hash value*/\n\n\t\t\t  /*TODO: do this not only for zeros but for any repeated byte. 
However for PNG\n\t\t\t  it's always going to be the zeros that dominate, so not important for PNG*/\n\tint* headz; /*similar to head, but for chainz*/\n\tunsigned short* chainz; /*those with same amount of zeros*/\n\tunsigned short* zeros; /*length of zeros streak, used as a second hash chain*/\n} Hash;\n\nstatic unsigned hash_init(Hash* hash, unsigned windowsize)\n{\n\tunsigned i;\n\thash->head = (int*)lodepng_malloc(sizeof(int) * HASH_NUM_VALUES);\n\thash->val = (int*)lodepng_malloc(sizeof(int) * windowsize);\n\thash->chain = (unsigned short*)lodepng_malloc(sizeof(unsigned short) * windowsize);\n\n\thash->zeros = (unsigned short*)lodepng_malloc(sizeof(unsigned short) * windowsize);\n\thash->headz = (int*)lodepng_malloc(sizeof(int) * (MAX_SUPPORTED_DEFLATE_LENGTH + 1));\n\thash->chainz = (unsigned short*)lodepng_malloc(sizeof(unsigned short) * windowsize);\n\n\tif (!hash->head || !hash->chain || !hash->val || !hash->headz || !hash->chainz || !hash->zeros)\n\t{\n\t\treturn 83; /*alloc fail*/\n\t}\n\n\t/*initialize hash table*/\n\tfor (i = 0; i != HASH_NUM_VALUES; ++i) hash->head[i] = -1;\n\tfor (i = 0; i != windowsize; ++i) hash->val[i] = -1;\n\tfor (i = 0; i != windowsize; ++i) hash->chain[i] = i; /*same value as index indicates uninitialized*/\n\n\tfor (i = 0; i <= MAX_SUPPORTED_DEFLATE_LENGTH; ++i) hash->headz[i] = -1;\n\tfor (i = 0; i != windowsize; ++i) hash->chainz[i] = i; /*same value as index indicates uninitialized*/\n\n\treturn 0;\n}\n\nstatic void hash_cleanup(Hash* hash)\n{\n\tlodepng_free(hash->head);\n\tlodepng_free(hash->val);\n\tlodepng_free(hash->chain);\n\n\tlodepng_free(hash->zeros);\n\tlodepng_free(hash->headz);\n\tlodepng_free(hash->chainz);\n}\n\n\n\nstatic unsigned getHash(const unsigned char* data, size_t size, size_t pos)\n{\n\tunsigned result = 0;\n\tif (pos + 2 < size)\n\t{\n\t\t/*A simple shift and xor hash is used. 
Since the data of PNGs is dominated\n\t\tby zeroes due to the filters, a better hash does not have a significant\n\t\teffect on speed in traversing the chain, and causes more time spend on\n\t\tcalculating the hash.*/\n\t\tresult ^= (unsigned)(data[pos + 0] << 0u);\n\t\tresult ^= (unsigned)(data[pos + 1] << 4u);\n\t\tresult ^= (unsigned)(data[pos + 2] << 8u);\n\t}\n\telse {\n\t\tsize_t amount, i;\n\t\tif (pos >= size) return 0;\n\t\tamount = size - pos;\n\t\tfor (i = 0; i != amount; ++i) result ^= (unsigned)(data[pos + i] << (i * 8u));\n\t}\n\treturn result & HASH_BIT_MASK;\n}\n\nstatic unsigned countZeros(const unsigned char* data, size_t size, size_t pos)\n{\n\tconst unsigned char* start = data + pos;\n\tconst unsigned char* end = start + MAX_SUPPORTED_DEFLATE_LENGTH;\n\tif (end > data + size) end = data + size;\n\tdata = start;\n\twhile (data != end && *data == 0) ++data;\n\t/*subtracting two addresses returned as 32-bit number (max value is MAX_SUPPORTED_DEFLATE_LENGTH)*/\n\treturn (unsigned)(data - start);\n}\n\n/*wpos = pos & (windowsize - 1)*/\nstatic void updateHashChain(Hash* hash, size_t wpos, unsigned hashval, unsigned short numzeros)\n{\n\thash->val[wpos] = (int)hashval;\n\tif (hash->head[hashval] != -1) hash->chain[wpos] = hash->head[hashval];\n\thash->head[hashval] = (int)wpos;\n\n\thash->zeros[wpos] = numzeros;\n\tif (hash->headz[numzeros] != -1) hash->chainz[wpos] = hash->headz[numzeros];\n\thash->headz[numzeros] = (int)wpos;\n}\n\n/*\nLZ77-encode the data. Return value is error code. The input are raw bytes, the output\nis in the form of unsigned integers with codes representing for example literal bytes, or\nlength/distance pairs.\nIt uses a hash table technique to let it encode faster. When doing LZ77 encoding, a\nsliding window (of windowsize) is used, and all past bytes in that window can be used as\nthe \"dictionary\". 
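The shift-and-xor hash in `getHash` is easy to check in isolation for the common case where at least 3 bytes are available at `pos`; a minimal sketch (`hash3_sketch` is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>

/* Shift-and-xor over 3 input bytes, masked to 16 bits
   (HASH_BIT_MASK = 65535), as in getHash's main branch. */
static unsigned hash3_sketch(const unsigned char* data, size_t pos)
{
	unsigned result = 0;
	result ^= (unsigned)(data[pos + 0] << 0u);
	result ^= (unsigned)(data[pos + 1] << 4u);
	result ^= (unsigned)(data[pos + 2] << 8u);
	return result & 65535u;
}
```

Three zero bytes hash to 0, which is why the zero-run chain (`headz`/`chainz`) keys off `hashval == 0`.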
A brute force search through all possible distances would be slow, and\nthis hash technique is one out of several ways to speed this up.\n*/\nstatic unsigned encodeLZ77(uivector* out, Hash* hash,\n\tconst unsigned char* in, size_t inpos, size_t insize, unsigned windowsize,\n\tunsigned minmatch, unsigned nicematch, unsigned lazymatching)\n{\n\tsize_t pos;\n\tunsigned i, error = 0;\n\t/*for large window lengths, assume the user wants no compression loss. Otherwise, max hash chain length speedup.*/\n\tunsigned maxchainlength = windowsize >= 8192 ? windowsize : windowsize / 8;\n\tunsigned maxlazymatch = windowsize >= 8192 ? MAX_SUPPORTED_DEFLATE_LENGTH : 64;\n\n\tunsigned usezeros = 1; /*not sure if setting it to false for windowsize < 8192 is better or worse*/\n\tunsigned numzeros = 0;\n\n\tunsigned offset; /*the offset represents the distance in LZ77 terminology*/\n\tunsigned length;\n\tunsigned lazy = 0;\n\tunsigned lazylength = 0, lazyoffset = 0;\n\tunsigned hashval;\n\tunsigned current_offset, current_length;\n\tunsigned prev_offset;\n\tconst unsigned char *lastptr, *foreptr, *backptr;\n\tunsigned hashpos;\n\n\tif (windowsize == 0 || windowsize > 32768) return 60; /*error: windowsize smaller/larger than allowed*/\n\tif ((windowsize & (windowsize - 1)) != 0) return 90; /*error: must be power of two*/\n\n\tif (nicematch > MAX_SUPPORTED_DEFLATE_LENGTH) nicematch = MAX_SUPPORTED_DEFLATE_LENGTH;\n\n\tfor (pos = inpos; pos < insize; ++pos)\n\t{\n\t\tsize_t wpos = pos & (windowsize - 1); /*position for in 'circular' hash buffers*/\n\t\tunsigned chainlength = 0;\n\n\t\thashval = getHash(in, insize, pos);\n\n\t\tif (usezeros && hashval == 0)\n\t\t{\n\t\t\tif (numzeros == 0) numzeros = countZeros(in, insize, pos);\n\t\t\telse if (pos + numzeros > insize || in[pos + numzeros - 1] != 0) --numzeros;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tnumzeros = 0;\n\t\t}\n\n\t\tupdateHashChain(hash, wpos, hashval, numzeros);\n\n\t\t/*the length and offset found for the current 
position*/\n\t\tlength = 0;\n\t\toffset = 0;\n\n\t\thashpos = hash->chain[wpos];\n\n\t\tlastptr = &in[insize < pos + MAX_SUPPORTED_DEFLATE_LENGTH ? insize : pos + MAX_SUPPORTED_DEFLATE_LENGTH];\n\n\t\t/*search for the longest string*/\n\t\tprev_offset = 0;\n\t\tfor (;;)\n\t\t{\n\t\t\tif (chainlength++ >= maxchainlength) break;\n\t\t\tcurrent_offset = (unsigned)(hashpos <= wpos ? wpos - hashpos : wpos - hashpos + windowsize);\n\n\t\t\tif (current_offset < prev_offset) break; /*stop when went completely around the circular buffer*/\n\t\t\tprev_offset = current_offset;\n\t\t\tif (current_offset > 0)\n\t\t\t{\n\t\t\t\t/*test the next characters*/\n\t\t\t\tforeptr = &in[pos];\n\t\t\t\tbackptr = &in[pos - current_offset];\n\n\t\t\t\t/*common case in PNGs is lots of zeros. Quickly skip over them as a speedup*/\n\t\t\t\tif (numzeros >= 3)\n\t\t\t\t{\n\t\t\t\t\tunsigned skip = hash->zeros[hashpos];\n\t\t\t\t\tif (skip > numzeros) skip = numzeros;\n\t\t\t\t\tbackptr += skip;\n\t\t\t\t\tforeptr += skip;\n\t\t\t\t}\n\n\t\t\t\twhile (foreptr != lastptr && *backptr == *foreptr) /*maximum supported length by deflate is max length*/\n\t\t\t\t{\n\t\t\t\t\t++backptr;\n\t\t\t\t\t++foreptr;\n\t\t\t\t}\n\t\t\t\tcurrent_length = (unsigned)(foreptr - &in[pos]);\n\n\t\t\t\tif (current_length > length)\n\t\t\t\t{\n\t\t\t\t\tlength = current_length; /*the longest length*/\n\t\t\t\t\toffset = current_offset; /*the offset that is related to this longest length*/\n\t\t\t\t\t\t\t\t\t\t\t /*jump out once a length of max length is found (speed gain). 
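The `current_offset` computed in the search loop is the backwards distance between two positions in the circular window buffer. A small sketch of that wrap-around arithmetic (`circular_offset_sketch` is a hypothetical name; `windowsize` is a power of two, as `encodeLZ77` requires):

```c
#include <assert.h>

/* Distance from an earlier chain position hashpos back to the current
   window position wpos in a circular buffer of size windowsize; mirrors
   the current_offset computation in encodeLZ77. */
static unsigned circular_offset_sketch(unsigned wpos, unsigned hashpos, unsigned windowsize)
{
	return hashpos <= wpos ? wpos - hashpos : wpos - hashpos + windowsize;
}
```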
This also jumps\n\t\t\t\t\t\t\t\t\t\t\t out if length is MAX_SUPPORTED_DEFLATE_LENGTH*/\n\t\t\t\t\tif (current_length >= nicematch) break;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (hashpos == hash->chain[hashpos]) break;\n\n\t\t\tif (numzeros >= 3 && length > numzeros)\n\t\t\t{\n\t\t\t\thashpos = hash->chainz[hashpos];\n\t\t\t\tif (hash->zeros[hashpos] != numzeros) break;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\thashpos = hash->chain[hashpos];\n\t\t\t\t/*outdated hash value, happens if particular value was not encountered in whole last window*/\n\t\t\t\tif (hash->val[hashpos] != (int)hashval) break;\n\t\t\t}\n\t\t}\n\n\t\tif (lazymatching)\n\t\t{\n\t\t\tif (!lazy && length >= 3 && length <= maxlazymatch && length < MAX_SUPPORTED_DEFLATE_LENGTH)\n\t\t\t{\n\t\t\t\tlazy = 1;\n\t\t\t\tlazylength = length;\n\t\t\t\tlazyoffset = offset;\n\t\t\t\tcontinue; /*try the next byte*/\n\t\t\t}\n\t\t\tif (lazy)\n\t\t\t{\n\t\t\t\tlazy = 0;\n\t\t\t\tif (pos == 0) ERROR_BREAK(81);\n\t\t\t\tif (length > lazylength + 1)\n\t\t\t\t{\n\t\t\t\t\t/*push the previous character as literal*/\n\t\t\t\t\tif (!uivector_push_back(out, in[pos - 1])) ERROR_BREAK(83 /*alloc fail*/);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tlength = lazylength;\n\t\t\t\t\toffset = lazyoffset;\n\t\t\t\t\thash->head[hashval] = -1; /*the same hashchain update will be done, this ensures no wrong alteration*/\n\t\t\t\t\thash->headz[numzeros] = -1; /*idem*/\n\t\t\t\t\t--pos;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (length >= 3 && offset > windowsize) ERROR_BREAK(86 /*too big (or overflown negative) offset*/);\n\n\t\t/*encode it as length/distance pair or literal value*/\n\t\tif (length < 3) /*only lengths of 3 or higher are supported as length/distance pair*/\n\t\t{\n\t\t\tif (!uivector_push_back(out, in[pos])) ERROR_BREAK(83 /*alloc fail*/);\n\t\t}\n\t\telse if (length < minmatch || (length == 3 && offset > 4096))\n\t\t{\n\t\t\t/*compensate for the fact that longer offsets have more extra bits, a\n\t\t\tlength of only 3 may be not 
worth it then*/\n\t\t\tif (!uivector_push_back(out, in[pos])) ERROR_BREAK(83 /*alloc fail*/);\n\t\t}\n\t\telse\n\t\t{\n\t\t\taddLengthDistance(out, length, offset);\n\t\t\tfor (i = 1; i < length; ++i)\n\t\t\t{\n\t\t\t\t++pos;\n\t\t\t\twpos = pos & (windowsize - 1);\n\t\t\t\thashval = getHash(in, insize, pos);\n\t\t\t\tif (usezeros && hashval == 0)\n\t\t\t\t{\n\t\t\t\t\tif (numzeros == 0) numzeros = countZeros(in, insize, pos);\n\t\t\t\t\telse if (pos + numzeros > insize || in[pos + numzeros - 1] != 0) --numzeros;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tnumzeros = 0;\n\t\t\t\t}\n\t\t\t\tupdateHashChain(hash, wpos, hashval, numzeros);\n\t\t\t}\n\t\t}\n\t} /*end of the loop through each character of input*/\n\n\treturn error;\n}\n\n/* /////////////////////////////////////////////////////////////////////////// */\n\nstatic unsigned deflateNoCompression(ucvector* out, const unsigned char* data, size_t datasize)\n{\n\t/*non compressed deflate block data: 1 bit BFINAL,2 bits BTYPE,(5 bits): it jumps to start of next byte,\n\t2 bytes LEN, 2 bytes NLEN, LEN bytes literal DATA*/\n\n\tsize_t i, j, numdeflateblocks = (datasize + 65534) / 65535;\n\tunsigned datapos = 0;\n\tfor (i = 0; i != numdeflateblocks; ++i)\n\t{\n\t\tunsigned BFINAL, BTYPE, LEN, NLEN;\n\t\tunsigned char firstbyte;\n\n\t\tBFINAL = (i == numdeflateblocks - 1);\n\t\tBTYPE = 0;\n\n\t\tfirstbyte = (unsigned char)(BFINAL + ((BTYPE & 1) << 1) + ((BTYPE & 2) << 1));\n\t\tucvector_push_back(out, firstbyte);\n\n\t\tLEN = 65535;\n\t\tif (datasize - datapos < 65535) LEN = (unsigned)datasize - datapos;\n\t\tNLEN = 65535 - LEN;\n\n\t\tucvector_push_back(out, (unsigned char)(LEN & 255));\n\t\tucvector_push_back(out, (unsigned char)(LEN >> 8));\n\t\tucvector_push_back(out, (unsigned char)(NLEN & 255));\n\t\tucvector_push_back(out, (unsigned char)(NLEN >> 8));\n\n\t\t/*Decompressed data*/\n\t\tfor (j = 0; j < 65535 && datapos < datasize; ++j)\n\t\t{\n\t\t\tucvector_push_back(out, 
data[datapos++]);\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/*\nwrite the lz77-encoded data, which has lit, len and dist codes, to compressed stream using huffman trees.\ntree_ll: the tree for lit and len codes.\ntree_d: the tree for distance codes.\n*/\nstatic void writeLZ77data(size_t* bp, ucvector* out, const uivector* lz77_encoded,\n\tconst HuffmanTree* tree_ll, const HuffmanTree* tree_d)\n{\n\tsize_t i = 0;\n\tfor (i = 0; i != lz77_encoded->size; ++i)\n\t{\n\t\tunsigned val = lz77_encoded->data[i];\n\t\taddHuffmanSymbol(bp, out, HuffmanTree_getCode(tree_ll, val), HuffmanTree_getLength(tree_ll, val));\n\t\tif (val > 256) /*for a length code, 3 more things have to be added*/\n\t\t{\n\t\t\tunsigned length_index = val - FIRST_LENGTH_CODE_INDEX;\n\t\t\tunsigned n_length_extra_bits = LENGTHEXTRA[length_index];\n\t\t\tunsigned length_extra_bits = lz77_encoded->data[++i];\n\n\t\t\tunsigned distance_code = lz77_encoded->data[++i];\n\n\t\t\tunsigned distance_index = distance_code;\n\t\t\tunsigned n_distance_extra_bits = DISTANCEEXTRA[distance_index];\n\t\t\tunsigned distance_extra_bits = lz77_encoded->data[++i];\n\n\t\t\taddBitsToStream(bp, out, length_extra_bits, n_length_extra_bits);\n\t\t\taddHuffmanSymbol(bp, out, HuffmanTree_getCode(tree_d, distance_code),\n\t\t\t\tHuffmanTree_getLength(tree_d, distance_code));\n\t\t\taddBitsToStream(bp, out, distance_extra_bits, n_distance_extra_bits);\n\t\t}\n\t}\n}\n\n/*Deflate for a block of type \"dynamic\", that is, with freely, optimally, created huffman trees*/\nstatic unsigned deflateDynamic(ucvector* out, size_t* bp, Hash* hash,\n\tconst unsigned char* data, size_t datapos, size_t dataend,\n\tconst LodePNGCompressSettings* settings, unsigned final)\n{\n\tunsigned error = 0;\n\n\t/*\n\tA block is compressed as follows: The PNG data is lz77 encoded, resulting in\n\tliteral bytes and length/distance pairs. This is then huffman compressed with\n\ttwo huffman trees. 
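The stored-block header written by `deflateNoCompression` above consists of `LEN` followed by `NLEN`, its ones' complement, both little-endian. A sketch of just that arithmetic, packing the two 16-bit fields into one value for checking (`stored_len_nlen_sketch` is a hypothetical helper):

```c
#include <assert.h>

/* LEN and NLEN of a stored (BTYPE 0) deflate block, packed as
   LEN | NLEN << 16. NLEN = 65535 - LEN is the ones' complement of LEN,
   exactly as deflateNoCompression writes it byte by byte. */
static unsigned stored_len_nlen_sketch(unsigned len)
{
	unsigned nlen = 65535u - len;
	return (len & 0xffffu) | (nlen << 16);
}
```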
One huffman tree is used for the lit and len values (\"ll\"),\n\tanother huffman tree is used for the dist values (\"d\"). These two trees are\n\tstored using their code lengths, and to compress even more these code lengths\n\tare also run-length encoded and huffman compressed. This gives a huffman tree\n\tof code lengths \"cl\". The code lengths used to describe this third tree are\n\tthe code length code lengths (\"clcl\").\n\t*/\n\n\t/*The lz77 encoded data, represented with integers since there will also be length and distance codes in it*/\n\tuivector lz77_encoded;\n\tHuffmanTree tree_ll; /*tree for lit,len values*/\n\tHuffmanTree tree_d; /*tree for distance codes*/\n\tHuffmanTree tree_cl; /*tree for encoding the code lengths representing tree_ll and tree_d*/\n\tuivector frequencies_ll; /*frequency of lit,len codes*/\n\tuivector frequencies_d; /*frequency of dist codes*/\n\tuivector frequencies_cl; /*frequency of code length codes*/\n\tuivector bitlen_lld; /*lit,len,dist code lengths (in bits), literally (without repeat codes).*/\n\tuivector bitlen_lld_e; /*bitlen_lld encoded with repeat codes (this is a rudimentary run length compression)*/\n\t\t\t\t\t\t   /*bitlen_cl is the code length code lengths (\"clcl\"). 
The bit lengths of codes to represent tree_cl\n\t\t\t\t\t\t   (these are written as is in the file, it would be crazy to compress these using yet another huffman\n\t\t\t\t\t\t   tree that needs to be represented by yet another set of code lengths)*/\n\tuivector bitlen_cl;\n\tsize_t datasize = dataend - datapos;\n\n\t/*\n\tDue to the huffman compression of huffman tree representations (\"two levels\"), there are some anologies:\n\tbitlen_lld is to tree_cl what data is to tree_ll and tree_d.\n\tbitlen_lld_e is to bitlen_lld what lz77_encoded is to data.\n\tbitlen_cl is to bitlen_lld_e what bitlen_lld is to lz77_encoded.\n\t*/\n\n\tunsigned BFINAL = final;\n\tsize_t numcodes_ll, numcodes_d, i;\n\tunsigned HLIT, HDIST, HCLEN;\n\n\tuivector_init(&lz77_encoded);\n\tHuffmanTree_init(&tree_ll);\n\tHuffmanTree_init(&tree_d);\n\tHuffmanTree_init(&tree_cl);\n\tuivector_init(&frequencies_ll);\n\tuivector_init(&frequencies_d);\n\tuivector_init(&frequencies_cl);\n\tuivector_init(&bitlen_lld);\n\tuivector_init(&bitlen_lld_e);\n\tuivector_init(&bitlen_cl);\n\n\t/*This while loop never loops due to a break at the end, it is here to\n\tallow breaking out of it to the cleanup phase on error conditions.*/\n\twhile (!error)\n\t{\n\t\tif (settings->use_lz77)\n\t\t{\n\t\t\terror = encodeLZ77(&lz77_encoded, hash, data, datapos, dataend, settings->windowsize,\n\t\t\t\tsettings->minmatch, settings->nicematch, settings->lazymatching);\n\t\t\tif (error) break;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (!uivector_resize(&lz77_encoded, datasize)) ERROR_BREAK(83 /*alloc fail*/);\n\t\t\tfor (i = datapos; i < dataend; ++i) lz77_encoded.data[i - datapos] = data[i]; /*no LZ77, but still will be Huffman compressed*/\n\t\t}\n\n\t\tif (!uivector_resizev(&frequencies_ll, 286, 0)) ERROR_BREAK(83 /*alloc fail*/);\n\t\tif (!uivector_resizev(&frequencies_d, 30, 0)) ERROR_BREAK(83 /*alloc fail*/);\n\n\t\t/*Count the frequencies of lit, len and dist codes*/\n\t\tfor (i = 0; i != lz77_encoded.size; 
++i)\n\t\t{\n\t\t\tunsigned symbol = lz77_encoded.data[i];\n\t\t\t++frequencies_ll.data[symbol];\n\t\t\tif (symbol > 256)\n\t\t\t{\n\t\t\t\tunsigned dist = lz77_encoded.data[i + 2];\n\t\t\t\t++frequencies_d.data[dist];\n\t\t\t\ti += 3;\n\t\t\t}\n\t\t}\n\t\tfrequencies_ll.data[256] = 1; /*there will be exactly 1 end code, at the end of the block*/\n\n\t\t\t\t\t\t\t\t\t  /*Make both huffman trees, one for the lit and len codes, one for the dist codes*/\n\t\terror = HuffmanTree_makeFromFrequencies(&tree_ll, frequencies_ll.data, 257, frequencies_ll.size, 15);\n\t\tif (error) break;\n\t\t/*2, not 1, is chosen for mincodes: some buggy PNG decoders require at least 2 symbols in the dist tree*/\n\t\terror = HuffmanTree_makeFromFrequencies(&tree_d, frequencies_d.data, 2, frequencies_d.size, 15);\n\t\tif (error) break;\n\n\t\tnumcodes_ll = tree_ll.numcodes; if (numcodes_ll > 286) numcodes_ll = 286;\n\t\tnumcodes_d = tree_d.numcodes; if (numcodes_d > 30) numcodes_d = 30;\n\t\t/*store the code lengths of both generated trees in bitlen_lld*/\n\t\tfor (i = 0; i != numcodes_ll; ++i) uivector_push_back(&bitlen_lld, HuffmanTree_getLength(&tree_ll, (unsigned)i));\n\t\tfor (i = 0; i != numcodes_d; ++i) uivector_push_back(&bitlen_lld, HuffmanTree_getLength(&tree_d, (unsigned)i));\n\n\t\t/*run-length compress bitlen_lld into bitlen_lld_e by using repeat codes 16 (copy length 3-6 times),\n\t\t17 (3-10 zeroes), 18 (11-138 zeroes)*/\n\t\tfor (i = 0; i != (unsigned)bitlen_lld.size; ++i)\n\t\t{\n\t\t\tunsigned j = 0; /*amount of repetitions*/\n\t\t\twhile (i + j + 1 < (unsigned)bitlen_lld.size && bitlen_lld.data[i + j + 1] == bitlen_lld.data[i]) ++j;\n\n\t\t\tif (bitlen_lld.data[i] == 0 && j >= 2) /*repeat code for zeroes*/\n\t\t\t{\n\t\t\t\t++j; /*include the first zero*/\n\t\t\t\tif (j <= 10) /*repeat code 17 supports max 10 zeroes*/\n\t\t\t\t{\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, 17);\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, j - 3);\n\t\t\t\t}\n\t\t\t\telse /*repeat code 18 
supports max 138 zeroes*/\n\t\t\t\t{\n\t\t\t\t\tif (j > 138) j = 138;\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, 18);\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, j - 11);\n\t\t\t\t}\n\t\t\t\ti += (j - 1);\n\t\t\t}\n\t\t\telse if (j >= 3) /*repeat code for value other than zero*/\n\t\t\t{\n\t\t\t\tsize_t k;\n\t\t\t\tunsigned num = j / 6, rest = j % 6;\n\t\t\t\tuivector_push_back(&bitlen_lld_e, bitlen_lld.data[i]);\n\t\t\t\tfor (k = 0; k < num; ++k)\n\t\t\t\t{\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, 16);\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, 6 - 3);\n\t\t\t\t}\n\t\t\t\tif (rest >= 3)\n\t\t\t\t{\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, 16);\n\t\t\t\t\tuivector_push_back(&bitlen_lld_e, rest - 3);\n\t\t\t\t}\n\t\t\t\telse j -= rest;\n\t\t\t\ti += j;\n\t\t\t}\n\t\t\telse /*too short to benefit from repeat code*/\n\t\t\t{\n\t\t\t\tuivector_push_back(&bitlen_lld_e, bitlen_lld.data[i]);\n\t\t\t}\n\t\t}\n\n\t\t/*generate tree_cl, the huffmantree of huffmantrees*/\n\n\t\tif (!uivector_resizev(&frequencies_cl, NUM_CODE_LENGTH_CODES, 0)) ERROR_BREAK(83 /*alloc fail*/);\n\t\tfor (i = 0; i != bitlen_lld_e.size; ++i)\n\t\t{\n\t\t\t++frequencies_cl.data[bitlen_lld_e.data[i]];\n\t\t\t/*after a repeat code come the bits that specify the number of repetitions,\n\t\t\tthose don't need to be in the frequencies_cl calculation*/\n\t\t\tif (bitlen_lld_e.data[i] >= 16) ++i;\n\t\t}\n\n\t\terror = HuffmanTree_makeFromFrequencies(&tree_cl, frequencies_cl.data,\n\t\t\tfrequencies_cl.size, frequencies_cl.size, 7);\n\t\tif (error) break;\n\n\t\tif (!uivector_resize(&bitlen_cl, tree_cl.numcodes)) ERROR_BREAK(83 /*alloc fail*/);\n\t\tfor (i = 0; i != tree_cl.numcodes; ++i)\n\t\t{\n\t\t\t/*lenghts of code length tree is in the order as specified by deflate*/\n\t\t\tbitlen_cl.data[i] = HuffmanTree_getLength(&tree_cl, CLCL_ORDER[i]);\n\t\t}\n\t\twhile (bitlen_cl.data[bitlen_cl.size - 1] == 0 && bitlen_cl.size > 4)\n\t\t{\n\t\t\t/*remove zeros at the end, but minimum size must be 
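The branches above pick between repeat code 17 (runs of 3-10 zero lengths, 3 extra bits) and code 18 (runs of 11-138 zero lengths, 7 extra bits). A sketch of just that choice, packing the extra-bits value into the upper byte for easy checking (`zero_run_code_sketch` is a hypothetical name):

```c
#include <assert.h>

/* Code-length repeat code for a run of n zero lengths (3 <= n <= 138):
   code 17 covers 3-10 zeros with extra value n-3, code 18 covers 11-138
   with extra value n-11 (RFC 1951). Returns code | extra << 8. */
static unsigned zero_run_code_sketch(unsigned n)
{
	if (n <= 10) return 17u | ((n - 3u) << 8);
	return 18u | ((n - 11u) << 8);
}
```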
4*/\n\t\t\tif (!uivector_resize(&bitlen_cl, bitlen_cl.size - 1)) ERROR_BREAK(83 /*alloc fail*/);\n\t\t}\n\t\tif (error) break;\n\n\t\t/*\n\t\tWrite everything into the output\n\n\t\tAfter the BFINAL and BTYPE, the dynamic block consists out of the following:\n\t\t- 5 bits HLIT, 5 bits HDIST, 4 bits HCLEN\n\t\t- (HCLEN+4)*3 bits code lengths of code length alphabet\n\t\t- HLIT + 257 code lenghts of lit/length alphabet (encoded using the code length\n\t\talphabet, + possible repetition codes 16, 17, 18)\n\t\t- HDIST + 1 code lengths of distance alphabet (encoded using the code length\n\t\talphabet, + possible repetition codes 16, 17, 18)\n\t\t- compressed data\n\t\t- 256 (end code)\n\t\t*/\n\n\t\t/*Write block type*/\n\t\taddBitToStream(bp, out, BFINAL);\n\t\taddBitToStream(bp, out, 0); /*first bit of BTYPE \"dynamic\"*/\n\t\taddBitToStream(bp, out, 1); /*second bit of BTYPE \"dynamic\"*/\n\n\t\t\t\t\t\t\t\t\t/*write the HLIT, HDIST and HCLEN values*/\n\t\tHLIT = (unsigned)(numcodes_ll - 257);\n\t\tHDIST = (unsigned)(numcodes_d - 1);\n\t\tHCLEN = (unsigned)bitlen_cl.size - 4;\n\t\t/*trim zeroes for HCLEN. 
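The header fields written after BFINAL/BTYPE are simple offsets of the counts computed above: HLIT is the lit/len code count minus 257, HDIST the distance code count minus 1, HCLEN the trimmed `bitlen_cl` size minus 4. A sketch of that arithmetic (the `*_sketch` helpers are hypothetical names):

```c
#include <assert.h>

/* Dynamic-block header fields (RFC 1951), as computed in deflateDynamic. */
static unsigned hlit_sketch(unsigned numcodes_ll)     { return numcodes_ll - 257u; }
static unsigned hdist_sketch(unsigned numcodes_d)     { return numcodes_d - 1u; }
static unsigned hclen_sketch(unsigned bitlen_cl_size) { return bitlen_cl_size - 4u; }
```

With the maximum counts (286 lit/len codes, 30 distance codes, 19 code-length code lengths), the fields are 29, 29 and 15, which fit their 5-, 5- and 4-bit slots.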
HLIT and HDIST were already trimmed at tree creation*/\n\t\twhile (!bitlen_cl.data[HCLEN + 4 - 1] && HCLEN > 0) --HCLEN;\n\t\taddBitsToStream(bp, out, HLIT, 5);\n\t\taddBitsToStream(bp, out, HDIST, 5);\n\t\taddBitsToStream(bp, out, HCLEN, 4);\n\n\t\t/*write the code lenghts of the code length alphabet*/\n\t\tfor (i = 0; i != HCLEN + 4; ++i) addBitsToStream(bp, out, bitlen_cl.data[i], 3);\n\n\t\t/*write the lenghts of the lit/len AND the dist alphabet*/\n\t\tfor (i = 0; i != bitlen_lld_e.size; ++i)\n\t\t{\n\t\t\taddHuffmanSymbol(bp, out, HuffmanTree_getCode(&tree_cl, bitlen_lld_e.data[i]),\n\t\t\t\tHuffmanTree_getLength(&tree_cl, bitlen_lld_e.data[i]));\n\t\t\t/*extra bits of repeat codes*/\n\t\t\tif (bitlen_lld_e.data[i] == 16) addBitsToStream(bp, out, bitlen_lld_e.data[++i], 2);\n\t\t\telse if (bitlen_lld_e.data[i] == 17) addBitsToStream(bp, out, bitlen_lld_e.data[++i], 3);\n\t\t\telse if (bitlen_lld_e.data[i] == 18) addBitsToStream(bp, out, bitlen_lld_e.data[++i], 7);\n\t\t}\n\n\t\t/*write the compressed data symbols*/\n\t\twriteLZ77data(bp, out, &lz77_encoded, &tree_ll, &tree_d);\n\t\t/*error: the length of the end code 256 must be larger than 0*/\n\t\tif (HuffmanTree_getLength(&tree_ll, 256) == 0) ERROR_BREAK(64);\n\n\t\t/*write the end code*/\n\t\taddHuffmanSymbol(bp, out, HuffmanTree_getCode(&tree_ll, 256), HuffmanTree_getLength(&tree_ll, 256));\n\n\t\tbreak; /*end of error-while*/\n\t}\n\n\t/*cleanup*/\n\tuivector_cleanup(&lz77_encoded);\n\tHuffmanTree_cleanup(&tree_ll);\n\tHuffmanTree_cleanup(&tree_d);\n\tHuffmanTree_cleanup(&tree_cl);\n\tuivector_cleanup(&frequencies_ll);\n\tuivector_cleanup(&frequencies_d);\n\tuivector_cleanup(&frequencies_cl);\n\tuivector_cleanup(&bitlen_lld_e);\n\tuivector_cleanup(&bitlen_lld);\n\tuivector_cleanup(&bitlen_cl);\n\n\treturn error;\n}\n\nstatic unsigned deflateFixed(ucvector* out, size_t* bp, Hash* hash,\n\tconst unsigned char* data,\n\tsize_t datapos, size_t dataend,\n\tconst LodePNGCompressSettings* settings, unsigned 
final)\n{\n\tHuffmanTree tree_ll; /*tree for literal values and length codes*/\n\tHuffmanTree tree_d; /*tree for distance codes*/\n\n\tunsigned BFINAL = final;\n\tunsigned error = 0;\n\tsize_t i;\n\n\tHuffmanTree_init(&tree_ll);\n\tHuffmanTree_init(&tree_d);\n\n\tgenerateFixedLitLenTree(&tree_ll);\n\tgenerateFixedDistanceTree(&tree_d);\n\n\taddBitToStream(bp, out, BFINAL);\n\taddBitToStream(bp, out, 1); /*first bit of BTYPE*/\n\taddBitToStream(bp, out, 0); /*second bit of BTYPE*/\n\n\tif (settings->use_lz77) /*LZ77 encoded*/\n\t{\n\t\tuivector lz77_encoded;\n\t\tuivector_init(&lz77_encoded);\n\t\terror = encodeLZ77(&lz77_encoded, hash, data, datapos, dataend, settings->windowsize,\n\t\t\tsettings->minmatch, settings->nicematch, settings->lazymatching);\n\t\tif (!error) writeLZ77data(bp, out, &lz77_encoded, &tree_ll, &tree_d);\n\t\tuivector_cleanup(&lz77_encoded);\n\t}\n\telse /*no LZ77, but still will be Huffman compressed*/\n\t{\n\t\tfor (i = datapos; i < dataend; ++i)\n\t\t{\n\t\t\taddHuffmanSymbol(bp, out, HuffmanTree_getCode(&tree_ll, data[i]), HuffmanTree_getLength(&tree_ll, data[i]));\n\t\t}\n\t}\n\t/*add END code*/\n\tif (!error) addHuffmanSymbol(bp, out, HuffmanTree_getCode(&tree_ll, 256), HuffmanTree_getLength(&tree_ll, 256));\n\n\t/*cleanup*/\n\tHuffmanTree_cleanup(&tree_ll);\n\tHuffmanTree_cleanup(&tree_d);\n\n\treturn error;\n}\n\nstatic unsigned lodepng_deflatev(ucvector* out, const unsigned char* in, size_t insize,\n\tconst LodePNGCompressSettings* settings)\n{\n\tunsigned error = 0;\n\tsize_t i, blocksize, numdeflateblocks;\n\tsize_t bp = 0; /*the bit pointer*/\n\tHash hash;\n\n\tif (settings->btype > 2) return 61;\n\telse if (settings->btype == 0) return deflateNoCompression(out, in, insize);\n\telse if (settings->btype == 1) blocksize = insize;\n\telse /*if(settings->btype == 2)*/\n\t{\n\t\t/*on PNGs, deflate blocks of 65-262k seem to give most dense encoding*/\n\t\tblocksize = insize / 8 + 8;\n\t\tif (blocksize < 65536) blocksize = 65536;\n\t\tif 
(blocksize > 262144) blocksize = 262144;\n\t}\n\n\tnumdeflateblocks = (insize + blocksize - 1) / blocksize;\n\tif (numdeflateblocks == 0) numdeflateblocks = 1;\n\n\terror = hash_init(&hash, settings->windowsize);\n\tif (error) return error;\n\n\tfor (i = 0; i != numdeflateblocks && !error; ++i)\n\t{\n\t\tunsigned final = (i == numdeflateblocks - 1);\n\t\tsize_t start = i * blocksize;\n\t\tsize_t end = start + blocksize;\n\t\tif (end > insize) end = insize;\n\n\t\tif (settings->btype == 1) error = deflateFixed(out, &bp, &hash, in, start, end, settings, final);\n\t\telse if (settings->btype == 2) error = deflateDynamic(out, &bp, &hash, in, start, end, settings, final);\n\t}\n\n\thash_cleanup(&hash);\n\n\treturn error;\n}\n\nunsigned lodepng_deflate(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGCompressSettings* settings)\n{\n\tunsigned error;\n\tucvector v;\n\tucvector_init_buffer(&v, *out, *outsize);\n\terror = lodepng_deflatev(&v, in, insize, settings);\n\t*out = v.data;\n\t*outsize = v.size;\n\treturn error;\n}\n\nstatic unsigned deflate(unsigned char** out, size_t* outsize,\n\tconst unsigned char* in, size_t insize,\n\tconst LodePNGCompressSettings* settings)\n{\n\tif (settings->custom_deflate)\n\t{\n\t\treturn settings->custom_deflate(out, outsize, in, insize, settings);\n\t}\n\telse\n\t{\n\t\treturn lodepng_deflate(out, outsize, in, insize, settings);\n\t}\n}\n\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Adler32                                                                  */\n/* ////////////////////////////////////////////////////////////////////////// */\n\nstatic unsigned update_adler32(unsigned adler, const unsigned char* data, unsigned len)\n{\n\tunsigned s1 = adler & 0xffff;\n\tunsigned s2 = (adler >> 16) & 0xffff;\n\n\twhile (len > 0)\n\t{\n\t\t/*at least 5552 sums can be done before the sums overflow, saving a lot of 
module divisions*/\n\t\tunsigned amount = len > 5552 ? 5552 : len;\n\t\tlen -= amount;\n\t\twhile (amount > 0)\n\t\t{\n\t\t\ts1 += (*data++);\n\t\t\ts2 += s1;\n\t\t\t--amount;\n\t\t}\n\t\ts1 %= 65521;\n\t\ts2 %= 65521;\n\t}\n\n\treturn (s2 << 16) | s1;\n}\n\n/*Return the adler32 of the bytes data[0..len-1]*/\nstatic unsigned adler32(const unsigned char* data, unsigned len)\n{\n\treturn update_adler32(1L, data, len);\n}\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Zlib                                                                   / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_DECODER\n\nunsigned lodepng_zlib_decompress(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tsize_t insize, const LodePNGDecompressSettings* settings)\n{\n\tunsigned error = 0;\n\tunsigned CM, CINFO, FDICT;\n\n\tif (insize < 2) return 53; /*error, size of zlib data too small*/\n\t\t\t\t\t\t\t   /*read information from zlib header*/\n\tif ((in[0] * 256 + in[1]) % 31 != 0)\n\t{\n\t\t/*error: 256 * in[0] + in[1] must be a multiple of 31, the FCHECK value is supposed to be made that way*/\n\t\treturn 24;\n\t}\n\n\tCM = in[0] & 15;\n\tCINFO = (in[0] >> 4) & 15;\n\t/*FCHECK = in[1] & 31;*/ /*FCHECK is already tested above*/\n\tFDICT = (in[1] >> 5) & 1;\n\t/*FLEVEL = (in[1] >> 6) & 3;*/ /*FLEVEL is not used here*/\n\n\tif (CM != 8 || CINFO > 7)\n\t{\n\t\t/*error: only compression method 8: inflate with sliding window of 32k is supported by the PNG spec*/\n\t\treturn 25;\n\t}\n\tif (FDICT != 0)\n\t{\n\t\t/*error: the specification of PNG says about the zlib stream:\n\t\t\"The additional flags shall not specify a preset dictionary.\"*/\n\t\treturn 26;\n\t}\n\n\terror = inflate(out, outsize, in + 2, insize - 2, settings);\n\tif (error) return error;\n\n\tif (!settings->ignore_adler32)\n\t{\n\t\tunsigned ADLER32 = lodepng_read32bitInt(&in[insize - 4]);\n\t\tunsigned checksum 
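`update_adler32` above defers the modulo reduction in batches of 5552 bytes for speed; a naive Adler-32 (RFC 1950) that reduces after every byte produces identical results and shows the definition directly (`adler32_simple` is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>

/* Straightforward Adler-32: s1 sums bytes (starting at 1), s2 sums the
   running s1 values, both modulo 65521, the largest prime below 65536. */
static unsigned adler32_simple(const unsigned char* data, size_t len)
{
	unsigned s1 = 1u, s2 = 0u;
	size_t i;
	for (i = 0; i < len; ++i) {
		s1 = (s1 + data[i]) % 65521u;
		s2 = (s2 + s1) % 65521u;
	}
	return (s2 << 16) | s1;
}
```

The checksum of the empty string is 1, which is why `adler32` seeds `update_adler32` with `1L`.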
= adler32(*out, (unsigned)(*outsize));\n\t\tif (checksum != ADLER32) return 58; /*error, adler checksum not correct, data must be corrupted*/\n\t}\n\n\treturn 0; /*no error*/\n}\n\nstatic unsigned zlib_decompress(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tsize_t insize, const LodePNGDecompressSettings* settings)\n{\n\tif (settings->custom_zlib)\n\t{\n\t\treturn settings->custom_zlib(out, outsize, in, insize, settings);\n\t}\n\telse\n\t{\n\t\treturn lodepng_zlib_decompress(out, outsize, in, insize, settings);\n\t}\n}\n\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\nunsigned lodepng_zlib_compress(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tsize_t insize, const LodePNGCompressSettings* settings)\n{\n\t/*initially, *out must be NULL and outsize 0, if you just give some random *out\n\tthat's pointing to a non allocated buffer, this'll crash*/\n\tucvector outv;\n\tsize_t i;\n\tunsigned error;\n\tunsigned char* deflatedata = 0;\n\tsize_t deflatesize = 0;\n\n\t/*zlib data: 1 byte CMF (CM+CINFO), 1 byte FLG, deflate data, 4 byte ADLER32 checksum of the Decompressed data*/\n\tunsigned CMF = 120; /*0b01111000: CM 8, CINFO 7. 
With CINFO 7, any window size up to 32768 can be used.*/\n\tunsigned FLEVEL = 0;\n\tunsigned FDICT = 0;\n\tunsigned CMFFLG = 256 * CMF + FDICT * 32 + FLEVEL * 64;\n\tunsigned FCHECK = 31 - CMFFLG % 31;\n\tCMFFLG += FCHECK;\n\n\t/*ucvector-controlled version of the output buffer, for dynamic array*/\n\tucvector_init_buffer(&outv, *out, *outsize);\n\n\tucvector_push_back(&outv, (unsigned char)(CMFFLG >> 8));\n\tucvector_push_back(&outv, (unsigned char)(CMFFLG & 255));\n\n\terror = deflate(&deflatedata, &deflatesize, in, insize, settings);\n\n\tif (!error)\n\t{\n\t\tunsigned ADLER32 = adler32(in, (unsigned)insize);\n\t\tfor (i = 0; i != deflatesize; ++i) ucvector_push_back(&outv, deflatedata[i]);\n\t\tlodepng_free(deflatedata);\n\t\tlodepng_add32bitInt(&outv, ADLER32);\n\t}\n\n\t*out = outv.data;\n\t*outsize = outv.size;\n\n\treturn error;\n}\n\n/* compress using the default or custom zlib function */\nstatic unsigned zlib_compress(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tsize_t insize, const LodePNGCompressSettings* settings)\n{\n\tif (settings->custom_zlib)\n\t{\n\t\treturn settings->custom_zlib(out, outsize, in, insize, settings);\n\t}\n\telse\n\t{\n\t\treturn lodepng_zlib_compress(out, outsize, in, insize, settings);\n\t}\n}\n\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n#else /*no LODEPNG_COMPILE_ZLIB*/\n\n#ifdef LODEPNG_COMPILE_DECODER\nstatic unsigned zlib_decompress(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tsize_t insize, const LodePNGDecompressSettings* settings)\n{\n\tif (!settings->custom_zlib) return 87; /*no custom zlib function provided */\n\treturn settings->custom_zlib(out, outsize, in, insize, settings);\n}\n#endif /*LODEPNG_COMPILE_DECODER*/\n#ifdef LODEPNG_COMPILE_ENCODER\nstatic unsigned zlib_compress(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tsize_t insize, const LodePNGCompressSettings* settings)\n{\n\tif (!settings->custom_zlib) return 87; /*no custom zlib function provided 
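The CMF/FLG arithmetic in `lodepng_zlib_compress` above can be verified on its own: FCHECK pads the 16-bit `CMF << 8 | FLG` value up to a multiple of 31 (RFC 1950), and with CMF = 0x78, FLEVEL = 0 and FDICT = 0 this produces the familiar `0x78 0x01` zlib header. A sketch (`zlib_header_sketch` is a hypothetical name):

```c
#include <assert.h>

/* zlib header with CM 8, CINFO 7, FDICT 0, FLEVEL 0, as built above. */
static unsigned zlib_header_sketch(void)
{
	unsigned CMF = 120u;              /* 0b01111000: CM 8, CINFO 7 */
	unsigned CMFFLG = 256u * CMF;     /* FDICT = 0, FLEVEL = 0 */
	unsigned FCHECK = 31u - CMFFLG % 31u;
	return CMFFLG + FCHECK;
}
```

The decoder side checks exactly this: `(in[0] * 256 + in[1]) % 31 == 0`.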
*/\n\treturn settings->custom_zlib(out, outsize, in, insize, settings);\n}\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n#endif /*LODEPNG_COMPILE_ZLIB*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\n/*this is a good tradeoff between speed and compression ratio*/\n#define DEFAULT_WINDOWSIZE 2048\n\nvoid lodepng_compress_settings_init(LodePNGCompressSettings* settings)\n{\n\t/*compress with dynamic huffman tree (not in the mathematical sense, just not the predefined one)*/\n\tsettings->btype = 2;\n\tsettings->use_lz77 = 1;\n\tsettings->windowsize = DEFAULT_WINDOWSIZE;\n\tsettings->minmatch = 3;\n\tsettings->nicematch = 128;\n\tsettings->lazymatching = 1;\n\n\tsettings->custom_zlib = 0;\n\tsettings->custom_deflate = 0;\n\tsettings->custom_context = 0;\n}\n\nconst LodePNGCompressSettings lodepng_default_compress_settings = { 2, 1, DEFAULT_WINDOWSIZE, 3, 128, 1, 0, 0, 0 };\n\n\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n#ifdef LODEPNG_COMPILE_DECODER\n\nvoid lodepng_decompress_settings_init(LodePNGDecompressSettings* settings)\n{\n\tsettings->ignore_adler32 = 0;\n\n\tsettings->custom_zlib = 0;\n\tsettings->custom_inflate = 0;\n\tsettings->custom_context = 0;\n}\n\nconst LodePNGDecompressSettings lodepng_default_decompress_settings = { 0, 0, 0, 0 };\n\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* // End of Zlib related code. Begin of PNG related code.                 
// */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_PNG\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / CRC32                                                                  / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n\n#ifndef LODEPNG_NO_COMPILE_CRC\n/* CRC polynomial: 0xedb88320 */\nstatic unsigned lodepng_crc32_table[256] = {\n\t0u, 1996959894u, 3993919788u, 2567524794u,  124634137u, 1886057615u, 3915621685u, 2657392035u,\n\t249268274u, 2044508324u, 3772115230u, 2547177864u,  162941995u, 2125561021u, 3887607047u, 2428444049u,\n\t498536548u, 1789927666u, 4089016648u, 2227061214u,  450548861u, 1843258603u, 4107580753u, 2211677639u,\n\t325883990u, 1684777152u, 4251122042u, 2321926636u,  335633487u, 1661365465u, 4195302755u, 2366115317u,\n\t997073096u, 1281953886u, 3579855332u, 2724688242u, 1006888145u, 1258607687u, 3524101629u, 2768942443u,\n\t901097722u, 1119000684u, 3686517206u, 2898065728u,  853044451u, 1172266101u, 3705015759u, 2882616665u,\n\t651767980u, 1373503546u, 3369554304u, 3218104598u,  565507253u, 1454621731u, 3485111705u, 3099436303u,\n\t671266974u, 1594198024u, 3322730930u, 2970347812u,  795835527u, 1483230225u, 3244367275u, 3060149565u,\n\t1994146192u,   31158534u, 2563907772u, 4023717930u, 1907459465u,  112637215u, 2680153253u, 3904427059u,\n\t2013776290u,  251722036u, 2517215374u, 3775830040u, 2137656763u,  141376813u, 2439277719u, 3865271297u,\n\t1802195444u,  476864866u, 2238001368u, 4066508878u, 1812370925u,  453092731u, 2181625025u, 4111451223u,\n\t1706088902u,  314042704u, 2344532202u, 4240017532u, 1658658271u,  366619977u, 2362670323u, 4224994405u,\n\t1303535960u,  984961486u, 2747007092u, 3569037538u, 1256170817u, 1037604311u, 2765210733u, 3554079995u,\n\t1131014506u,  879679996u, 2909243462u, 3663771856u, 
1141124467u,  855842277u, 2852801631u, 3708648649u,\n\t1342533948u,  654459306u, 3188396048u, 3373015174u, 1466479909u,  544179635u, 3110523913u, 3462522015u,\n\t1591671054u,  702138776u, 2966460450u, 3352799412u, 1504918807u,  783551873u, 3082640443u, 3233442989u,\n\t3988292384u, 2596254646u,   62317068u, 1957810842u, 3939845945u, 2647816111u,   81470997u, 1943803523u,\n\t3814918930u, 2489596804u,  225274430u, 2053790376u, 3826175755u, 2466906013u,  167816743u, 2097651377u,\n\t4027552580u, 2265490386u,  503444072u, 1762050814u, 4150417245u, 2154129355u,  426522225u, 1852507879u,\n\t4275313526u, 2312317920u,  282753626u, 1742555852u, 4189708143u, 2394877945u,  397917763u, 1622183637u,\n\t3604390888u, 2714866558u,  953729732u, 1340076626u, 3518719985u, 2797360999u, 1068828381u, 1219638859u,\n\t3624741850u, 2936675148u,  906185462u, 1090812512u, 3747672003u, 2825379669u,  829329135u, 1181335161u,\n\t3412177804u, 3160834842u,  628085408u, 1382605366u, 3423369109u, 3138078467u,  570562233u, 1426400815u,\n\t3317316542u, 2998733608u,  733239954u, 1555261956u, 3268935591u, 3050360625u,  752459403u, 1541320221u,\n\t2607071920u, 3965973030u, 1969922972u,   40735498u, 2617837225u, 3943577151u, 1913087877u,   83908371u,\n\t2512341634u, 3803740692u, 2075208622u,  213261112u, 2463272603u, 3855990285u, 2094854071u,  198958881u,\n\t2262029012u, 4057260610u, 1759359992u,  534414190u, 2176718541u, 4139329115u, 1873836001u,  414664567u,\n\t2282248934u, 4279200368u, 1711684554u,  285281116u, 2405801727u, 4167216745u, 1634467795u,  376229701u,\n\t2685067896u, 3608007406u, 1308918612u,  956543938u, 2808555105u, 3495958263u, 1231636301u, 1047427035u,\n\t2932959818u, 3654703836u, 1088359270u,  936918000u, 2847714899u, 3736837829u, 1202900863u,  817233897u,\n\t3183342108u, 3401237130u, 1404277552u,  615818150u, 3134207493u, 3453421203u, 1423857449u,  601450431u,\n\t3009837614u, 3294710456u, 1567103746u,  711928724u, 3020668471u, 3272380065u, 1510334235u,  755167117u\n};\n\n/*Return the 
CRC of the bytes buf[0..len-1].*/\nunsigned lodepng_crc32(const unsigned char* data, size_t length)\n{\n\tunsigned r = 0xffffffffu;\n\tsize_t i;\n\tfor (i = 0; i < length; ++i)\n\t{\n\t\tr = lodepng_crc32_table[(r ^ data[i]) & 0xff] ^ (r >> 8);\n\t}\n\treturn r ^ 0xffffffffu;\n}\n#else /* !LODEPNG_NO_COMPILE_CRC */\nunsigned lodepng_crc32(const unsigned char* data, size_t length);\n#endif /* !LODEPNG_NO_COMPILE_CRC */\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Reading and writing single bits and bytes from/to stream for LodePNG   / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\nstatic unsigned char readBitFromReversedStream(size_t* bitpointer, const unsigned char* bitstream)\n{\n\tunsigned char result = (unsigned char)((bitstream[(*bitpointer) >> 3] >> (7 - ((*bitpointer) & 0x7))) & 1);\n\t++(*bitpointer);\n\treturn result;\n}\n\nstatic unsigned readBitsFromReversedStream(size_t* bitpointer, const unsigned char* bitstream, size_t nbits)\n{\n\tunsigned result = 0;\n\tsize_t i;\n\tfor (i = 0; i < nbits; ++i)\n\t{\n\t\tresult <<= 1;\n\t\tresult |= (unsigned)readBitFromReversedStream(bitpointer, bitstream);\n\t}\n\treturn result;\n}\n\n#ifdef LODEPNG_COMPILE_DECODER\nstatic void setBitOfReversedStream0(size_t* bitpointer, unsigned char* bitstream, unsigned char bit)\n{\n\t/*the current bit in bitstream must be 0 for this to work*/\n\tif (bit)\n\t{\n\t\t/*earlier bit of huffman code is in a lesser significant bit of an earlier byte*/\n\t\tbitstream[(*bitpointer) >> 3] |= (bit << (7 - ((*bitpointer) & 0x7)));\n\t}\n\t++(*bitpointer);\n}\n#endif /*LODEPNG_COMPILE_DECODER*/\n\nstatic void setBitOfReversedStream(size_t* bitpointer, unsigned char* bitstream, unsigned char bit)\n{\n\t/*the current bit in bitstream may be 0 or 1 for this to work*/\n\tif (bit == 0) bitstream[(*bitpointer) >> 3] &= (unsigned char)(~(1 << (7 - ((*bitpointer) & 0x7))));\n\telse         bitstream[(*bitpointer) 
>> 3] |= (1 << (7 - ((*bitpointer) & 0x7)));\n\t++(*bitpointer);\n}\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / PNG chunks                                                             / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\nunsigned lodepng_chunk_length(const unsigned char* chunk)\n{\n\treturn lodepng_read32bitInt(&chunk[0]);\n}\n\nvoid lodepng_chunk_type(char type[5], const unsigned char* chunk)\n{\n\tunsigned i;\n\tfor (i = 0; i != 4; ++i) type[i] = (char)chunk[4 + i];\n\ttype[4] = 0; /*null termination char*/\n}\n\nunsigned char lodepng_chunk_type_equals(const unsigned char* chunk, const char* type)\n{\n\tif (strlen(type) != 4) return 0;\n\treturn (chunk[4] == type[0] && chunk[5] == type[1] && chunk[6] == type[2] && chunk[7] == type[3]);\n}\n\nunsigned char lodepng_chunk_ancillary(const unsigned char* chunk)\n{\n\treturn((chunk[4] & 32) != 0);\n}\n\nunsigned char lodepng_chunk_private(const unsigned char* chunk)\n{\n\treturn((chunk[6] & 32) != 0);\n}\n\nunsigned char lodepng_chunk_safetocopy(const unsigned char* chunk)\n{\n\treturn((chunk[7] & 32) != 0);\n}\n\nunsigned char* lodepng_chunk_data(unsigned char* chunk)\n{\n\treturn &chunk[8];\n}\n\nconst unsigned char* lodepng_chunk_data_const(const unsigned char* chunk)\n{\n\treturn &chunk[8];\n}\n\nunsigned lodepng_chunk_check_crc(const unsigned char* chunk)\n{\n\tunsigned length = lodepng_chunk_length(chunk);\n\tunsigned CRC = lodepng_read32bitInt(&chunk[length + 8]);\n\t/*the CRC is taken of the data and the 4 chunk type letters, not the length*/\n\tunsigned checksum = lodepng_crc32(&chunk[4], length + 4);\n\tif (CRC != checksum) return 1;\n\telse return 0;\n}\n\nvoid lodepng_chunk_generate_crc(unsigned char* chunk)\n{\n\tunsigned length = lodepng_chunk_length(chunk);\n\tunsigned CRC = lodepng_crc32(&chunk[4], length + 4);\n\tlodepng_set32bitInt(chunk + 8 + length, CRC);\n}\n\nunsigned char* lodepng_chunk_next(unsigned 
char* chunk)\n{\n\tunsigned total_chunk_length = lodepng_chunk_length(chunk) + 12;\n\treturn chunk + total_chunk_length;\n}\n\nconst unsigned char* lodepng_chunk_next_const(const unsigned char* chunk)\n{\n\tunsigned total_chunk_length = lodepng_chunk_length(chunk) + 12;\n\treturn chunk + total_chunk_length;\n}\n\nunsigned lodepng_chunk_append(unsigned char** out, size_t* outlength, const unsigned char* chunk)\n{\n\tunsigned i;\n\tunsigned total_chunk_length = lodepng_chunk_length(chunk) + 12;\n\tunsigned char *chunk_start, *new_buffer;\n\tsize_t new_length = (*outlength) + total_chunk_length;\n\tif (new_length < total_chunk_length || new_length < (*outlength)) return 77; /*integer overflow happened*/\n\n\tnew_buffer = (unsigned char*)lodepng_realloc(*out, new_length);\n\tif (!new_buffer) return 83; /*alloc fail*/\n\t(*out) = new_buffer;\n\t(*outlength) = new_length;\n\tchunk_start = &(*out)[new_length - total_chunk_length];\n\n\tfor (i = 0; i != total_chunk_length; ++i) chunk_start[i] = chunk[i];\n\n\treturn 0;\n}\n\nunsigned lodepng_chunk_create(unsigned char** out, size_t* outlength, unsigned length,\n\tconst char* type, const unsigned char* data)\n{\n\tunsigned i;\n\tunsigned char *chunk, *new_buffer;\n\tsize_t new_length = (*outlength) + length + 12;\n\tif (new_length < length + 12 || new_length < (*outlength)) return 77; /*integer overflow happened*/\n\tnew_buffer = (unsigned char*)lodepng_realloc(*out, new_length);\n\tif (!new_buffer) return 83; /*alloc fail*/\n\t(*out) = new_buffer;\n\t(*outlength) = new_length;\n\tchunk = &(*out)[(*outlength) - length - 12];\n\n\t/*1: length*/\n\tlodepng_set32bitInt(chunk, (unsigned)length);\n\n\t/*2: chunk name (4 letters)*/\n\tchunk[4] = (unsigned char)type[0];\n\tchunk[5] = (unsigned char)type[1];\n\tchunk[6] = (unsigned char)type[2];\n\tchunk[7] = (unsigned char)type[3];\n\n\t/*3: the data*/\n\tfor (i = 0; i != length; ++i) chunk[8 + i] = data[i];\n\n\t/*4: CRC (of the chunkname characters and the 
data)*/\n\tlodepng_chunk_generate_crc(chunk);\n\n\treturn 0;\n}\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / Color types and such                                                   / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n/*return type is a LodePNG error code*/\nstatic unsigned checkColorValidity(LodePNGColorType colortype, unsigned bd) /*bd = bitdepth*/\n{\n\tswitch (colortype)\n\t{\n\tcase 0: if (!(bd == 1 || bd == 2 || bd == 4 || bd == 8 || bd == 16)) return 37; break; /*grey*/\n\tcase 2: if (!(bd == 8 || bd == 16)) return 37; break; /*RGB*/\n\tcase 3: if (!(bd == 1 || bd == 2 || bd == 4 || bd == 8)) return 37; break; /*palette*/\n\tcase 4: if (!(bd == 8 || bd == 16)) return 37; break; /*grey + alpha*/\n\tcase 6: if (!(bd == 8 || bd == 16)) return 37; break; /*RGBA*/\n\tdefault: return 31;\n\t}\n\treturn 0; /*allowed color type / bits combination*/\n}\n\nstatic unsigned getNumColorChannels(LodePNGColorType colortype)\n{\n\tswitch (colortype)\n\t{\n\tcase 0: return 1; /*grey*/\n\tcase 2: return 3; /*RGB*/\n\tcase 3: return 1; /*palette*/\n\tcase 4: return 2; /*grey + alpha*/\n\tcase 6: return 4; /*RGBA*/\n\t}\n\treturn 0; /*nonexistent color type*/\n}\n\nstatic unsigned lodepng_get_bpp_lct(LodePNGColorType colortype, unsigned bitdepth)\n{\n\t/*bits per pixel is number of channels * bits per channel*/\n\treturn getNumColorChannels(colortype) * bitdepth;\n}\n\n/* ////////////////////////////////////////////////////////////////////////// */\n\nvoid lodepng_color_mode_init(LodePNGColorMode* info)\n{\n\tinfo->key_defined = 0;\n\tinfo->key_r = info->key_g = info->key_b = 0;\n\tinfo->colortype = LCT_RGBA;\n\tinfo->bitdepth = 8;\n\tinfo->palette = 0;\n\tinfo->palettesize = 0;\n}\n\nvoid lodepng_color_mode_cleanup(LodePNGColorMode* info)\n{\n\tlodepng_palette_clear(info);\n}\n\nunsigned lodepng_color_mode_copy(LodePNGColorMode* dest, const LodePNGColorMode* source)\n{\n\tsize_t 
i;\n\tlodepng_color_mode_cleanup(dest);\n\t*dest = *source;\n\tif (source->palette)\n\t{\n\t\tdest->palette = (unsigned char*)lodepng_malloc(1024);\n\t\tif (!dest->palette && source->palettesize) return 83; /*alloc fail*/\n\t\tfor (i = 0; i != source->palettesize * 4; ++i) dest->palette[i] = source->palette[i];\n\t}\n\treturn 0;\n}\n\nstatic int lodepng_color_mode_equal(const LodePNGColorMode* a, const LodePNGColorMode* b)\n{\n\tsize_t i;\n\tif (a->colortype != b->colortype) return 0;\n\tif (a->bitdepth != b->bitdepth) return 0;\n\tif (a->key_defined != b->key_defined) return 0;\n\tif (a->key_defined)\n\t{\n\t\tif (a->key_r != b->key_r) return 0;\n\t\tif (a->key_g != b->key_g) return 0;\n\t\tif (a->key_b != b->key_b) return 0;\n\t}\n\tif (a->palettesize != b->palettesize) return 0;\n\tfor (i = 0; i != a->palettesize * 4; ++i)\n\t{\n\t\tif (a->palette[i] != b->palette[i]) return 0;\n\t}\n\treturn 1;\n}\n\n#ifdef LODEPNG_COMPILE_ENCODER\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n/* Makes a temporary LodePNGColorMode that does not need cleanup (no palette) */\nstatic LodePNGColorMode lodepng_color_mode_make(LodePNGColorType colortype, unsigned bitdepth)\n{\n\tLodePNGColorMode result;\n\tlodepng_color_mode_init(&result);\n\tresult.colortype = colortype;\n\tresult.bitdepth = bitdepth;\n\treturn result;\n}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\nvoid lodepng_palette_clear(LodePNGColorMode* info)\n{\n\tif (info->palette) lodepng_free(info->palette);\n\tinfo->palette = 0;\n\tinfo->palettesize = 0;\n}\n\nunsigned lodepng_palette_add(LodePNGColorMode* info,\n\tunsigned char r, unsigned char g, unsigned char b, unsigned char a)\n{\n\tunsigned char* data;\n\t/*the same resize technique as C++ std::vectors is used, and here it's made so that for a palette with\n\tthe max of 256 colors, it'll have the exact alloc size*/\n\tif (!info->palette) /*allocate palette if empty*/\n\t{\n\t\t/*room for 256 colors with 4 bytes each*/\n\t\tdata = 
(unsigned char*)lodepng_realloc(info->palette, 1024);\n\t\tif (!data) return 83; /*alloc fail*/\n\t\telse info->palette = data;\n\t}\n\tinfo->palette[4 * info->palettesize + 0] = r;\n\tinfo->palette[4 * info->palettesize + 1] = g;\n\tinfo->palette[4 * info->palettesize + 2] = b;\n\tinfo->palette[4 * info->palettesize + 3] = a;\n\t++info->palettesize;\n\treturn 0;\n}\n\n/*calculate bits per pixel out of colortype and bitdepth*/\nunsigned lodepng_get_bpp(const LodePNGColorMode* info)\n{\n\treturn lodepng_get_bpp_lct(info->colortype, info->bitdepth);\n}\n\nunsigned lodepng_get_channels(const LodePNGColorMode* info)\n{\n\treturn getNumColorChannels(info->colortype);\n}\n\nunsigned lodepng_is_greyscale_type(const LodePNGColorMode* info)\n{\n\treturn info->colortype == LCT_GREY || info->colortype == LCT_GREY_ALPHA;\n}\n\nunsigned lodepng_is_alpha_type(const LodePNGColorMode* info)\n{\n\treturn (info->colortype & 4) != 0; /*4 or 6*/\n}\n\nunsigned lodepng_is_palette_type(const LodePNGColorMode* info)\n{\n\treturn info->colortype == LCT_PALETTE;\n}\n\nunsigned lodepng_has_palette_alpha(const LodePNGColorMode* info)\n{\n\tsize_t i;\n\tfor (i = 0; i != info->palettesize; ++i)\n\t{\n\t\tif (info->palette[i * 4 + 3] < 255) return 1;\n\t}\n\treturn 0;\n}\n\nunsigned lodepng_can_have_alpha(const LodePNGColorMode* info)\n{\n\treturn info->key_defined\n\t\t|| lodepng_is_alpha_type(info)\n\t\t|| lodepng_has_palette_alpha(info);\n}\n\nsize_t lodepng_get_raw_size_lct(unsigned w, unsigned h, LodePNGColorType colortype, unsigned bitdepth)\n{\n\tsize_t bpp = lodepng_get_bpp_lct(colortype, bitdepth);\n\tsize_t n = (size_t)w * (size_t)h;\n\treturn ((n / 8) * bpp) + ((n & 7) * bpp + 7) / 8;\n}\n\nsize_t lodepng_get_raw_size(unsigned w, unsigned h, const LodePNGColorMode* color)\n{\n\treturn lodepng_get_raw_size_lct(w, h, color->colortype, color->bitdepth);\n}\n\n\n#ifdef LODEPNG_COMPILE_PNG\n#ifdef LODEPNG_COMPILE_DECODER\n\n/*in an idat chunk, each scanline is a multiple of 8 bits, unlike 
the lodepng output buffer,\nand in addition has one extra byte per line: the filter byte. So this gives a larger\nresult than lodepng_get_raw_size. */\nstatic size_t lodepng_get_raw_size_idat(unsigned w, unsigned h, const LodePNGColorMode* color)\n{\n\tsize_t bpp = lodepng_get_bpp(color);\n\t/* + 1 for the filter byte, and possibly plus padding bits per line */\n\tsize_t line = ((size_t)(w / 8) * bpp) + 1 + ((w & 7) * bpp + 7) / 8;\n\treturn (size_t)h * line;\n}\n\n/* Safely check if multiplying two integers will overflow (no undefined\nbehavior, compiler removing the code, etc...) and output result. */\nstatic int lodepng_mulofl(size_t a, size_t b, size_t* result)\n{\n\t*result = a * b; /* Unsigned multiplication is well defined and safe in C90 */\n\treturn (a != 0 && *result / a != b);\n}\n\n/* Safely check if adding two integers will overflow (no undefined\nbehavior, compiler removing the code, etc...) and output result. */\nstatic int lodepng_addofl(size_t a, size_t b, size_t* result)\n{\n\t*result = a + b; /* Unsigned addition is well defined and safe in C90 */\n\treturn *result < a;\n}\n\n/*Safely checks whether size_t overflow can be caused due to amount of pixels.\nThis check is overcautious rather than precise. 
If this check indicates no overflow,\nyou can safely compute in a size_t (but not an unsigned):\n-(size_t)w * (size_t)h * 8\n-number of bytes in IDAT (including filter, padding and Adam7 bytes)\n-number of bytes in raw color model\nReturns 1 if overflow possible, 0 if not.\n*/\nstatic int lodepng_pixel_overflow(unsigned w, unsigned h,\n\tconst LodePNGColorMode* pngcolor, const LodePNGColorMode* rawcolor)\n{\n\tsize_t bpp = LODEPNG_MAX(lodepng_get_bpp(pngcolor), lodepng_get_bpp(rawcolor));\n\tsize_t numpixels, total;\n\tsize_t line; /* bytes per line in worst case */\n\n\tif (lodepng_mulofl((size_t)w, (size_t)h, &numpixels)) return 1;\n\tif (lodepng_mulofl(numpixels, 8, &total)) return 1; /* bit pointer with 8-bit color, or 8 bytes per channel color */\n\n\t/* Bytes per scanline with the expression \"((w / 8) * bpp) + ((w & 7) * bpp + 7) / 8\" */\n\tif (lodepng_mulofl((size_t)(w / 8), bpp, &line)) return 1;\n\tif (lodepng_addofl(line, ((w & 7) * bpp + 7) / 8, &line)) return 1;\n\n\tif (lodepng_addofl(line, 5, &line)) return 1; /* 5 bytes overhead per line: 1 filterbyte, 4 for Adam7 worst case */\n\tif (lodepng_mulofl(line, h, &total)) return 1; /* Total bytes in worst case */\n\n\treturn 0; /* no overflow */\n}\n#endif /*LODEPNG_COMPILE_DECODER*/\n#endif /*LODEPNG_COMPILE_PNG*/\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\nstatic void LodePNGUnknownChunks_init(LodePNGInfo* info)\n{\n\tunsigned i;\n\tfor (i = 0; i != 3; ++i) info->unknown_chunks_data[i] = 0;\n\tfor (i = 0; i != 3; ++i) info->unknown_chunks_size[i] = 0;\n}\n\nstatic void LodePNGUnknownChunks_cleanup(LodePNGInfo* info)\n{\n\tunsigned i;\n\tfor (i = 0; i != 3; ++i) lodepng_free(info->unknown_chunks_data[i]);\n}\n\nstatic unsigned LodePNGUnknownChunks_copy(LodePNGInfo* dest, const LodePNGInfo* src)\n{\n\tunsigned i;\n\n\tLodePNGUnknownChunks_cleanup(dest);\n\n\tfor (i = 0; i != 3; ++i)\n\t{\n\t\tsize_t j;\n\t\tdest->unknown_chunks_size[i] = 
src->unknown_chunks_size[i];\n\t\tdest->unknown_chunks_data[i] = (unsigned char*)lodepng_malloc(src->unknown_chunks_size[i]);\n\t\tif (!dest->unknown_chunks_data[i] && dest->unknown_chunks_size[i]) return 83; /*alloc fail*/\n\t\tfor (j = 0; j < src->unknown_chunks_size[i]; ++j)\n\t\t{\n\t\t\tdest->unknown_chunks_data[i][j] = src->unknown_chunks_data[i][j];\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/******************************************************************************/\n\nstatic void LodePNGText_init(LodePNGInfo* info)\n{\n\tinfo->text_num = 0;\n\tinfo->text_keys = NULL;\n\tinfo->text_strings = NULL;\n}\n\nstatic void LodePNGText_cleanup(LodePNGInfo* info)\n{\n\tsize_t i;\n\tfor (i = 0; i != info->text_num; ++i)\n\t{\n\t\tstring_cleanup(&info->text_keys[i]);\n\t\tstring_cleanup(&info->text_strings[i]);\n\t}\n\tlodepng_free(info->text_keys);\n\tlodepng_free(info->text_strings);\n}\n\nstatic unsigned LodePNGText_copy(LodePNGInfo* dest, const LodePNGInfo* source)\n{\n\tsize_t i = 0;\n\tdest->text_keys = 0;\n\tdest->text_strings = 0;\n\tdest->text_num = 0;\n\tfor (i = 0; i != source->text_num; ++i)\n\t{\n\t\tCERROR_TRY_RETURN(lodepng_add_text(dest, source->text_keys[i], source->text_strings[i]));\n\t}\n\treturn 0;\n}\n\nvoid lodepng_clear_text(LodePNGInfo* info)\n{\n\tLodePNGText_cleanup(info);\n}\n\nunsigned lodepng_add_text(LodePNGInfo* info, const char* key, const char* str)\n{\n\tchar** new_keys = (char**)(lodepng_realloc(info->text_keys, sizeof(char*) * (info->text_num + 1)));\n\tchar** new_strings = (char**)(lodepng_realloc(info->text_strings, sizeof(char*) * (info->text_num + 1)));\n\tif (!new_keys || !new_strings)\n\t{\n\t\tlodepng_free(new_keys);\n\t\tlodepng_free(new_strings);\n\t\treturn 83; /*alloc fail*/\n\t}\n\n\t++info->text_num;\n\tinfo->text_keys = new_keys;\n\tinfo->text_strings = new_strings;\n\n\tinfo->text_keys[info->text_num - 1] = alloc_string(key);\n\tinfo->text_strings[info->text_num - 1] = alloc_string(str);\n\n\treturn 
0;\n}\n\n/******************************************************************************/\n\nstatic void LodePNGIText_init(LodePNGInfo* info)\n{\n\tinfo->itext_num = 0;\n\tinfo->itext_keys = NULL;\n\tinfo->itext_langtags = NULL;\n\tinfo->itext_transkeys = NULL;\n\tinfo->itext_strings = NULL;\n}\n\nstatic void LodePNGIText_cleanup(LodePNGInfo* info)\n{\n\tsize_t i;\n\tfor (i = 0; i != info->itext_num; ++i)\n\t{\n\t\tstring_cleanup(&info->itext_keys[i]);\n\t\tstring_cleanup(&info->itext_langtags[i]);\n\t\tstring_cleanup(&info->itext_transkeys[i]);\n\t\tstring_cleanup(&info->itext_strings[i]);\n\t}\n\tlodepng_free(info->itext_keys);\n\tlodepng_free(info->itext_langtags);\n\tlodepng_free(info->itext_transkeys);\n\tlodepng_free(info->itext_strings);\n}\n\nstatic unsigned LodePNGIText_copy(LodePNGInfo* dest, const LodePNGInfo* source)\n{\n\tsize_t i = 0;\n\tdest->itext_keys = 0;\n\tdest->itext_langtags = 0;\n\tdest->itext_transkeys = 0;\n\tdest->itext_strings = 0;\n\tdest->itext_num = 0;\n\tfor (i = 0; i != source->itext_num; ++i)\n\t{\n\t\tCERROR_TRY_RETURN(lodepng_add_itext(dest, source->itext_keys[i], source->itext_langtags[i],\n\t\t\tsource->itext_transkeys[i], source->itext_strings[i]));\n\t}\n\treturn 0;\n}\n\nvoid lodepng_clear_itext(LodePNGInfo* info)\n{\n\tLodePNGIText_cleanup(info);\n}\n\nunsigned lodepng_add_itext(LodePNGInfo* info, const char* key, const char* langtag,\n\tconst char* transkey, const char* str)\n{\n\tchar** new_keys = (char**)(lodepng_realloc(info->itext_keys, sizeof(char*) * (info->itext_num + 1)));\n\tchar** new_langtags = (char**)(lodepng_realloc(info->itext_langtags, sizeof(char*) * (info->itext_num + 1)));\n\tchar** new_transkeys = (char**)(lodepng_realloc(info->itext_transkeys, sizeof(char*) * (info->itext_num + 1)));\n\tchar** new_strings = (char**)(lodepng_realloc(info->itext_strings, sizeof(char*) * (info->itext_num + 1)));\n\tif (!new_keys || !new_langtags || !new_transkeys || 
!new_strings)\n\t{\n\t\tlodepng_free(new_keys);\n\t\tlodepng_free(new_langtags);\n\t\tlodepng_free(new_transkeys);\n\t\tlodepng_free(new_strings);\n\t\treturn 83; /*alloc fail*/\n\t}\n\n\t++info->itext_num;\n\tinfo->itext_keys = new_keys;\n\tinfo->itext_langtags = new_langtags;\n\tinfo->itext_transkeys = new_transkeys;\n\tinfo->itext_strings = new_strings;\n\n\tinfo->itext_keys[info->itext_num - 1] = alloc_string(key);\n\tinfo->itext_langtags[info->itext_num - 1] = alloc_string(langtag);\n\tinfo->itext_transkeys[info->itext_num - 1] = alloc_string(transkey);\n\tinfo->itext_strings[info->itext_num - 1] = alloc_string(str);\n\n\treturn 0;\n}\n\n/* same as set but does not delete */\nstatic unsigned lodepng_assign_icc(LodePNGInfo* info, const char* name, const unsigned char* profile, unsigned profile_size)\n{\n\tinfo->iccp_name = alloc_string(name);\n\tinfo->iccp_profile = (unsigned char*)lodepng_malloc(profile_size);\n\n\tif (!info->iccp_name || !info->iccp_profile) return 83; /*alloc fail*/\n\n\tmemcpy(info->iccp_profile, profile, profile_size);\n\tinfo->iccp_profile_size = profile_size;\n\n\treturn 0; /*ok*/\n}\n\nunsigned lodepng_set_icc(LodePNGInfo* info, const char* name, const unsigned char* profile, unsigned profile_size)\n{\n\tif (info->iccp_name) lodepng_clear_icc(info);\n\n\treturn lodepng_assign_icc(info, name, profile, profile_size);\n}\n\nvoid lodepng_clear_icc(LodePNGInfo* info)\n{\n\tstring_cleanup(&info->iccp_name);\n\tlodepng_free(info->iccp_profile);\n\tinfo->iccp_profile = NULL;\n\tinfo->iccp_profile_size = 0;\n}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\nvoid lodepng_info_init(LodePNGInfo* info)\n{\n\tlodepng_color_mode_init(&info->color);\n\tinfo->interlace_method = 0;\n\tinfo->compression_method = 0;\n\tinfo->filter_method = 0;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tinfo->background_defined = 0;\n\tinfo->background_r = info->background_g = info->background_b = 
0;\n\n\tLodePNGText_init(info);\n\tLodePNGIText_init(info);\n\n\tinfo->time_defined = 0;\n\tinfo->phys_defined = 0;\n\n\tinfo->gama_defined = 0;\n\tinfo->chrm_defined = 0;\n\tinfo->srgb_defined = 0;\n\tinfo->iccp_defined = 0;\n\tinfo->iccp_name = NULL;\n\tinfo->iccp_profile = NULL;\n\n\tLodePNGUnknownChunks_init(info);\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n}\n\nvoid lodepng_info_cleanup(LodePNGInfo* info)\n{\n\tlodepng_color_mode_cleanup(&info->color);\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tLodePNGText_cleanup(info);\n\tLodePNGIText_cleanup(info);\n\n\tlodepng_clear_icc(info);\n\n\tLodePNGUnknownChunks_cleanup(info);\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n}\n\nunsigned lodepng_info_copy(LodePNGInfo* dest, const LodePNGInfo* source)\n{\n\tlodepng_info_cleanup(dest);\n\t*dest = *source;\n\tlodepng_color_mode_init(&dest->color);\n\tCERROR_TRY_RETURN(lodepng_color_mode_copy(&dest->color, &source->color));\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tCERROR_TRY_RETURN(LodePNGText_copy(dest, source));\n\tCERROR_TRY_RETURN(LodePNGIText_copy(dest, source));\n\tif (source->iccp_defined)\n\t{\n\t\tCERROR_TRY_RETURN(lodepng_assign_icc(dest, source->iccp_name, source->iccp_profile, source->iccp_profile_size));\n\t}\n\n\tLodePNGUnknownChunks_init(dest);\n\tCERROR_TRY_RETURN(LodePNGUnknownChunks_copy(dest, source));\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\treturn 0;\n}\n\n/* ////////////////////////////////////////////////////////////////////////// */\n\n/*index: bitgroup index, bits: bitgroup size (1, 2 or 4), in: bitgroup value, out: octet array to add bits to*/\nstatic void addColorBits(unsigned char* out, size_t index, unsigned bits, unsigned in)\n{\n\tunsigned m = bits == 1 ? 7 : bits == 2 ? 3 : 1; /*8 / bits - 1*/\n\t/*p = the partial index in the byte, e.g. 
with 4 palettebits it is 0 for first half or 1 for second half*/\n\tunsigned p = index & m;\n\tin &= (1u << bits) - 1u; /*filter out any other bits of the input value*/\n\tin = in << (bits * (m - p));\n\tif (p == 0) out[index * bits / 8] = in;\n\telse out[index * bits / 8] |= in;\n}\n\ntypedef struct ColorTree ColorTree;\n\n/*\nOne node of a color tree\nThis is the data structure used to count the number of unique colors and to get a palette\nindex for a color. It's like an octree, but because the alpha channel is used too, each\nnode has 16 instead of 8 children.\n*/\nstruct ColorTree\n{\n\tColorTree* children[16]; /*up to 16 pointers to ColorTree of next level*/\n\tint index; /*the payload. Only has a meaningful value if this is in the last level*/\n};\n\nstatic void color_tree_init(ColorTree* tree)\n{\n\tint i;\n\tfor (i = 0; i != 16; ++i) tree->children[i] = 0;\n\ttree->index = -1;\n}\n\nstatic void color_tree_cleanup(ColorTree* tree)\n{\n\tint i;\n\tfor (i = 0; i != 16; ++i)\n\t{\n\t\tif (tree->children[i])\n\t\t{\n\t\t\tcolor_tree_cleanup(tree->children[i]);\n\t\t\tlodepng_free(tree->children[i]);\n\t\t}\n\t}\n}\n\n/*returns -1 if color not present, its index otherwise*/\nstatic int color_tree_get(ColorTree* tree, unsigned char r, unsigned char g, unsigned char b, unsigned char a)\n{\n\tint bit = 0;\n\tfor (bit = 0; bit < 8; ++bit)\n\t{\n\t\tint i = 8 * ((r >> bit) & 1) + 4 * ((g >> bit) & 1) + 2 * ((b >> bit) & 1) + 1 * ((a >> bit) & 1);\n\t\tif (!tree->children[i]) return -1;\n\t\telse tree = tree->children[i];\n\t}\n\treturn tree ? 
tree->index : -1;\n}\n\n#ifdef LODEPNG_COMPILE_ENCODER\nstatic int color_tree_has(ColorTree* tree, unsigned char r, unsigned char g, unsigned char b, unsigned char a)\n{\n\treturn color_tree_get(tree, r, g, b, a) >= 0;\n}\n#endif /*LODEPNG_COMPILE_ENCODER*/\n\n/*color is not allowed to already exist.\nIndex should be >= 0 (it's signed to be compatible with using -1 for \"doesn't exist\")*/\nstatic void color_tree_add(ColorTree* tree,\n\tunsigned char r, unsigned char g, unsigned char b, unsigned char a, unsigned index)\n{\n\tint bit;\n\tfor (bit = 0; bit < 8; ++bit)\n\t{\n\t\tint i = 8 * ((r >> bit) & 1) + 4 * ((g >> bit) & 1) + 2 * ((b >> bit) & 1) + 1 * ((a >> bit) & 1);\n\t\tif (!tree->children[i])\n\t\t{\n\t\t\ttree->children[i] = (ColorTree*)lodepng_malloc(sizeof(ColorTree));\n\t\t\tcolor_tree_init(tree->children[i]);\n\t\t}\n\t\ttree = tree->children[i];\n\t}\n\ttree->index = (int)index;\n}\n\n/*put a pixel, given its RGBA color, into image of any color type*/\nstatic unsigned rgba8ToPixel(unsigned char* out, size_t i,\n\tconst LodePNGColorMode* mode, ColorTree* tree /*for palette*/,\n\tunsigned char r, unsigned char g, unsigned char b, unsigned char a)\n{\n\tif (mode->colortype == LCT_GREY)\n\t{\n\t\tunsigned char grey = r; /*((unsigned short)r + g + b) / 3*/;\n\t\tif (mode->bitdepth == 8) out[i] = grey;\n\t\telse if (mode->bitdepth == 16) out[i * 2 + 0] = out[i * 2 + 1] = grey;\n\t\telse\n\t\t{\n\t\t\t/*take the most significant bits of grey*/\n\t\t\tgrey = (grey >> (8 - mode->bitdepth)) & ((1 << mode->bitdepth) - 1);\n\t\t\taddColorBits(out, i, mode->bitdepth, grey);\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_RGB)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tout[i * 3 + 0] = r;\n\t\t\tout[i * 3 + 1] = g;\n\t\t\tout[i * 3 + 2] = b;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tout[i * 6 + 0] = out[i * 6 + 1] = r;\n\t\t\tout[i * 6 + 2] = out[i * 6 + 3] = g;\n\t\t\tout[i * 6 + 4] = out[i * 6 + 5] = b;\n\t\t}\n\t}\n\telse if (mode->colortype == 
LCT_PALETTE)\n\t{\n\t\tint index = color_tree_get(tree, r, g, b, a);\n\t\tif (index < 0) return 82; /*color not in palette*/\n\t\tif (mode->bitdepth == 8) out[i] = index;\n\t\telse addColorBits(out, i, mode->bitdepth, (unsigned)index);\n\t}\n\telse if (mode->colortype == LCT_GREY_ALPHA)\n\t{\n\t\tunsigned char grey = r; /*((unsigned short)r + g + b) / 3*/;\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tout[i * 2 + 0] = grey;\n\t\t\tout[i * 2 + 1] = a;\n\t\t}\n\t\telse if (mode->bitdepth == 16)\n\t\t{\n\t\t\tout[i * 4 + 0] = out[i * 4 + 1] = grey;\n\t\t\tout[i * 4 + 2] = out[i * 4 + 3] = a;\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_RGBA)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tout[i * 4 + 0] = r;\n\t\t\tout[i * 4 + 1] = g;\n\t\t\tout[i * 4 + 2] = b;\n\t\t\tout[i * 4 + 3] = a;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tout[i * 8 + 0] = out[i * 8 + 1] = r;\n\t\t\tout[i * 8 + 2] = out[i * 8 + 3] = g;\n\t\t\tout[i * 8 + 4] = out[i * 8 + 5] = b;\n\t\t\tout[i * 8 + 6] = out[i * 8 + 7] = a;\n\t\t}\n\t}\n\n\treturn 0; /*no error*/\n}\n\n/*put a pixel, given its RGBA16 color, into image of any color 16-bitdepth type*/\nstatic void rgba16ToPixel(unsigned char* out, size_t i,\n\tconst LodePNGColorMode* mode,\n\tunsigned short r, unsigned short g, unsigned short b, unsigned short a)\n{\n\tif (mode->colortype == LCT_GREY)\n\t{\n\t\tunsigned short grey = r; /*((unsigned)r + g + b) / 3*/;\n\t\tout[i * 2 + 0] = (grey >> 8) & 255;\n\t\tout[i * 2 + 1] = grey & 255;\n\t}\n\telse if (mode->colortype == LCT_RGB)\n\t{\n\t\tout[i * 6 + 0] = (r >> 8) & 255;\n\t\tout[i * 6 + 1] = r & 255;\n\t\tout[i * 6 + 2] = (g >> 8) & 255;\n\t\tout[i * 6 + 3] = g & 255;\n\t\tout[i * 6 + 4] = (b >> 8) & 255;\n\t\tout[i * 6 + 5] = b & 255;\n\t}\n\telse if (mode->colortype == LCT_GREY_ALPHA)\n\t{\n\t\tunsigned short grey = r; /*((unsigned)r + g + b) / 3*/;\n\t\tout[i * 4 + 0] = (grey >> 8) & 255;\n\t\tout[i * 4 + 1] = grey & 255;\n\t\tout[i * 4 + 2] = (a >> 8) & 255;\n\t\tout[i * 4 + 3] = a & 
255;\n\t}\n\telse if (mode->colortype == LCT_RGBA)\n\t{\n\t\tout[i * 8 + 0] = (r >> 8) & 255;\n\t\tout[i * 8 + 1] = r & 255;\n\t\tout[i * 8 + 2] = (g >> 8) & 255;\n\t\tout[i * 8 + 3] = g & 255;\n\t\tout[i * 8 + 4] = (b >> 8) & 255;\n\t\tout[i * 8 + 5] = b & 255;\n\t\tout[i * 8 + 6] = (a >> 8) & 255;\n\t\tout[i * 8 + 7] = a & 255;\n\t}\n}\n\n/*Get RGBA8 color of pixel with index i (y * width + x) from the raw image with given color type.*/\nstatic void getPixelColorRGBA8(unsigned char* r, unsigned char* g,\n\tunsigned char* b, unsigned char* a,\n\tconst unsigned char* in, size_t i,\n\tconst LodePNGColorMode* mode)\n{\n\tif (mode->colortype == LCT_GREY)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\t*r = *g = *b = in[i];\n\t\t\tif (mode->key_defined && *r == mode->key_r) *a = 0;\n\t\t\telse *a = 255;\n\t\t}\n\t\telse if (mode->bitdepth == 16)\n\t\t{\n\t\t\t*r = *g = *b = in[i * 2 + 0];\n\t\t\tif (mode->key_defined && 256U * in[i * 2 + 0] + in[i * 2 + 1] == mode->key_r) *a = 0;\n\t\t\telse *a = 255;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tunsigned highest = ((1U << mode->bitdepth) - 1U); /*highest possible value for this bit depth*/\n\t\t\tsize_t j = i * mode->bitdepth;\n\t\t\tunsigned value = readBitsFromReversedStream(&j, in, mode->bitdepth);\n\t\t\t*r = *g = *b = (value * 255) / highest;\n\t\t\tif (mode->key_defined && value == mode->key_r) *a = 0;\n\t\t\telse *a = 255;\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_RGB)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\t*r = in[i * 3 + 0]; *g = in[i * 3 + 1]; *b = in[i * 3 + 2];\n\t\t\tif (mode->key_defined && *r == mode->key_r && *g == mode->key_g && *b == mode->key_b) *a = 0;\n\t\t\telse *a = 255;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*r = in[i * 6 + 0];\n\t\t\t*g = in[i * 6 + 2];\n\t\t\t*b = in[i * 6 + 4];\n\t\t\tif (mode->key_defined && 256U * in[i * 6 + 0] + in[i * 6 + 1] == mode->key_r\n\t\t\t\t&& 256U * in[i * 6 + 2] + in[i * 6 + 3] == mode->key_g\n\t\t\t\t&& 256U * in[i * 6 + 4] + in[i * 6 + 5] == mode->key_b) *a = 
0;\n\t\t\telse *a = 255;\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_PALETTE)\n\t{\n\t\tunsigned index;\n\t\tif (mode->bitdepth == 8) index = in[i];\n\t\telse\n\t\t{\n\t\t\tsize_t j = i * mode->bitdepth;\n\t\t\tindex = readBitsFromReversedStream(&j, in, mode->bitdepth);\n\t\t}\n\n\t\tif (index >= mode->palettesize)\n\t\t{\n\t\t\t/*This is an error according to the PNG spec, but common PNG decoders make it black instead.\n\t\t\tDone here too, slightly faster due to no error handling needed.*/\n\t\t\t*r = *g = *b = 0;\n\t\t\t*a = 255;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*r = mode->palette[index * 4 + 0];\n\t\t\t*g = mode->palette[index * 4 + 1];\n\t\t\t*b = mode->palette[index * 4 + 2];\n\t\t\t*a = mode->palette[index * 4 + 3];\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_GREY_ALPHA)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\t*r = *g = *b = in[i * 2 + 0];\n\t\t\t*a = in[i * 2 + 1];\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*r = *g = *b = in[i * 4 + 0];\n\t\t\t*a = in[i * 4 + 2];\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_RGBA)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\t*r = in[i * 4 + 0];\n\t\t\t*g = in[i * 4 + 1];\n\t\t\t*b = in[i * 4 + 2];\n\t\t\t*a = in[i * 4 + 3];\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*r = in[i * 8 + 0];\n\t\t\t*g = in[i * 8 + 2];\n\t\t\t*b = in[i * 8 + 4];\n\t\t\t*a = in[i * 8 + 6];\n\t\t}\n\t}\n}\n\n/*Similar to getPixelColorRGBA8, but with all the for loops inside the color\nmode test cases, optimized to convert the colors much faster when converting\nto RGBA or RGB with 8 bits per channel. buffer must be RGBA or RGB output with\nenough memory; if has_alpha is true the output is RGBA. mode has the color mode\nof the input buffer.*/\nstatic void getPixelColorsRGBA8(unsigned char* buffer, size_t numpixels,\n\tunsigned has_alpha, const unsigned char* in,\n\tconst LodePNGColorMode* mode)\n{\n\tunsigned num_channels = has_alpha ? 
4 : 3;\n\tsize_t i;\n\tif (mode->colortype == LCT_GREY)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = buffer[1] = buffer[2] = in[i];\n\t\t\t\tif (has_alpha) buffer[3] = mode->key_defined && in[i] == mode->key_r ? 0 : 255;\n\t\t\t}\n\t\t}\n\t\telse if (mode->bitdepth == 16)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = buffer[1] = buffer[2] = in[i * 2];\n\t\t\t\tif (has_alpha) buffer[3] = mode->key_defined && 256U * in[i * 2 + 0] + in[i * 2 + 1] == mode->key_r ? 0 : 255;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tunsigned highest = ((1U << mode->bitdepth) - 1U); /*highest possible value for this bit depth*/\n\t\t\tsize_t j = 0;\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tunsigned value = readBitsFromReversedStream(&j, in, mode->bitdepth);\n\t\t\t\tbuffer[0] = buffer[1] = buffer[2] = (value * 255) / highest;\n\t\t\t\tif (has_alpha) buffer[3] = mode->key_defined && value == mode->key_r ? 0 : 255;\n\t\t\t}\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_RGB)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = in[i * 3 + 0];\n\t\t\t\tbuffer[1] = in[i * 3 + 1];\n\t\t\t\tbuffer[2] = in[i * 3 + 2];\n\t\t\t\tif (has_alpha) buffer[3] = mode->key_defined && buffer[0] == mode->key_r\n\t\t\t\t\t&& buffer[1] == mode->key_g && buffer[2] == mode->key_b ? 0 : 255;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = in[i * 6 + 0];\n\t\t\t\tbuffer[1] = in[i * 6 + 2];\n\t\t\t\tbuffer[2] = in[i * 6 + 4];\n\t\t\t\tif (has_alpha) buffer[3] = mode->key_defined\n\t\t\t\t\t&& 256U * in[i * 6 + 0] + in[i * 6 + 1] == mode->key_r\n\t\t\t\t\t&& 256U * in[i * 6 + 2] + in[i * 6 + 3] == mode->key_g\n\t\t\t\t\t&& 256U * in[i * 6 + 4] + in[i * 6 + 5] == mode->key_b ? 
0 : 255;\n\t\t\t}\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_PALETTE)\n\t{\n\t\tunsigned index;\n\t\tsize_t j = 0;\n\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t{\n\t\t\tif (mode->bitdepth == 8) index = in[i];\n\t\t\telse index = readBitsFromReversedStream(&j, in, mode->bitdepth);\n\n\t\t\tif (index >= mode->palettesize)\n\t\t\t{\n\t\t\t\t/*This is an error according to the PNG spec, but most PNG decoders make it black instead.\n\t\t\t\tDone here too, slightly faster due to no error handling needed.*/\n\t\t\t\tbuffer[0] = buffer[1] = buffer[2] = 0;\n\t\t\t\tif (has_alpha) buffer[3] = 255;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tbuffer[0] = mode->palette[index * 4 + 0];\n\t\t\t\tbuffer[1] = mode->palette[index * 4 + 1];\n\t\t\t\tbuffer[2] = mode->palette[index * 4 + 2];\n\t\t\t\tif (has_alpha) buffer[3] = mode->palette[index * 4 + 3];\n\t\t\t}\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_GREY_ALPHA)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = buffer[1] = buffer[2] = in[i * 2 + 0];\n\t\t\t\tif (has_alpha) buffer[3] = in[i * 2 + 1];\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = buffer[1] = buffer[2] = in[i * 4 + 0];\n\t\t\t\tif (has_alpha) buffer[3] = in[i * 4 + 2];\n\t\t\t}\n\t\t}\n\t}\n\telse if (mode->colortype == LCT_RGBA)\n\t{\n\t\tif (mode->bitdepth == 8)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = in[i * 4 + 0];\n\t\t\t\tbuffer[1] = in[i * 4 + 1];\n\t\t\t\tbuffer[2] = in[i * 4 + 2];\n\t\t\t\tif (has_alpha) buffer[3] = in[i * 4 + 3];\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i, buffer += num_channels)\n\t\t\t{\n\t\t\t\tbuffer[0] = in[i * 8 + 0];\n\t\t\t\tbuffer[1] = in[i * 8 + 2];\n\t\t\t\tbuffer[2] = in[i * 8 + 4];\n\t\t\t\tif (has_alpha) buffer[3] = in[i * 8 + 
6];\n\t\t\t}\n\t\t}\n\t}\n}\n\n/*Get RGBA16 color of pixel with index i (y * width + x) from the raw image with\ngiven color type, but the given color type must be 16-bit itself.*/\nstatic void getPixelColorRGBA16(unsigned short* r, unsigned short* g, unsigned short* b, unsigned short* a,\n\tconst unsigned char* in, size_t i, const LodePNGColorMode* mode)\n{\n\tif (mode->colortype == LCT_GREY)\n\t{\n\t\t*r = *g = *b = 256 * in[i * 2 + 0] + in[i * 2 + 1];\n\t\tif (mode->key_defined && 256U * in[i * 2 + 0] + in[i * 2 + 1] == mode->key_r) *a = 0;\n\t\telse *a = 65535;\n\t}\n\telse if (mode->colortype == LCT_RGB)\n\t{\n\t\t*r = 256u * in[i * 6 + 0] + in[i * 6 + 1];\n\t\t*g = 256u * in[i * 6 + 2] + in[i * 6 + 3];\n\t\t*b = 256u * in[i * 6 + 4] + in[i * 6 + 5];\n\t\tif (mode->key_defined\n\t\t\t&& 256u * in[i * 6 + 0] + in[i * 6 + 1] == mode->key_r\n\t\t\t&& 256u * in[i * 6 + 2] + in[i * 6 + 3] == mode->key_g\n\t\t\t&& 256u * in[i * 6 + 4] + in[i * 6 + 5] == mode->key_b) *a = 0;\n\t\telse *a = 65535;\n\t}\n\telse if (mode->colortype == LCT_GREY_ALPHA)\n\t{\n\t\t*r = *g = *b = 256u * in[i * 4 + 0] + in[i * 4 + 1];\n\t\t*a = 256u * in[i * 4 + 2] + in[i * 4 + 3];\n\t}\n\telse if (mode->colortype == LCT_RGBA)\n\t{\n\t\t*r = 256u * in[i * 8 + 0] + in[i * 8 + 1];\n\t\t*g = 256u * in[i * 8 + 2] + in[i * 8 + 3];\n\t\t*b = 256u * in[i * 8 + 4] + in[i * 8 + 5];\n\t\t*a = 256u * in[i * 8 + 6] + in[i * 8 + 7];\n\t}\n}\n\nunsigned lodepng_convert(unsigned char* out, const unsigned char* in,\n\tconst LodePNGColorMode* mode_out, const LodePNGColorMode* mode_in,\n\tunsigned w, unsigned h)\n{\n\tsize_t i;\n\tColorTree tree;\n\tsize_t numpixels = (size_t)w * (size_t)h;\n\tunsigned error = 0;\n\n\tif (lodepng_color_mode_equal(mode_out, mode_in))\n\t{\n\t\tsize_t numbytes = lodepng_get_raw_size(w, h, mode_in);\n\t\tfor (i = 0; i != numbytes; ++i) out[i] = in[i];\n\t\treturn 0;\n\t}\n\n\tif (mode_out->colortype == LCT_PALETTE)\n\t{\n\t\tsize_t palettesize = mode_out->palettesize;\n\t\tconst 
unsigned char* palette = mode_out->palette;\n\t\tsize_t palsize = (size_t)1u << mode_out->bitdepth;\n\t\t/*if the user specified output palette but did not give the values, assume\n\t\tthey want the values of the input color type (assuming that one is palette).\n\t\tNote that we never create a new palette ourselves.*/\n\t\tif (palettesize == 0)\n\t\t{\n\t\t\tpalettesize = mode_in->palettesize;\n\t\t\tpalette = mode_in->palette;\n\t\t\t/*if the input was also palette with same bitdepth, then the color types are also\n\t\t\tequal, so copy literally. This to preserve the exact indices that were in the PNG\n\t\t\teven in case there are duplicate colors in the palette.*/\n\t\t\tif (mode_in->colortype == LCT_PALETTE && mode_in->bitdepth == mode_out->bitdepth)\n\t\t\t{\n\t\t\t\tsize_t numbytes = lodepng_get_raw_size(w, h, mode_in);\n\t\t\t\tfor (i = 0; i != numbytes; ++i) out[i] = in[i];\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t\tif (palettesize < palsize) palsize = palettesize;\n\t\tcolor_tree_init(&tree);\n\t\tfor (i = 0; i != palsize; ++i)\n\t\t{\n\t\t\tconst unsigned char* p = &palette[i * 4];\n\t\t\tcolor_tree_add(&tree, p[0], p[1], p[2], p[3], (unsigned)i);\n\t\t}\n\t}\n\n\tif (mode_in->bitdepth == 16 && mode_out->bitdepth == 16)\n\t{\n\t\tfor (i = 0; i != numpixels; ++i)\n\t\t{\n\t\t\tunsigned short r = 0, g = 0, b = 0, a = 0;\n\t\t\tgetPixelColorRGBA16(&r, &g, &b, &a, in, i, mode_in);\n\t\t\trgba16ToPixel(out, i, mode_out, r, g, b, a);\n\t\t}\n\t}\n\telse if (mode_out->bitdepth == 8 && mode_out->colortype == LCT_RGBA)\n\t{\n\t\tgetPixelColorsRGBA8(out, numpixels, 1, in, mode_in);\n\t}\n\telse if (mode_out->bitdepth == 8 && mode_out->colortype == LCT_RGB)\n\t{\n\t\tgetPixelColorsRGBA8(out, numpixels, 0, in, mode_in);\n\t}\n\telse\n\t{\n\t\tunsigned char r = 0, g = 0, b = 0, a = 0;\n\t\tfor (i = 0; i != numpixels; ++i)\n\t\t{\n\t\t\tgetPixelColorRGBA8(&r, &g, &b, &a, in, i, mode_in);\n\t\t\terror = rgba8ToPixel(out, i, mode_out, &tree, r, g, b, a);\n\t\t\tif (error) 
break;\n\t\t}\n\t}\n\n\tif (mode_out->colortype == LCT_PALETTE)\n\t{\n\t\tcolor_tree_cleanup(&tree);\n\t}\n\n\treturn error;\n}\n\n\n/* Converts a single rgb color without alpha from one type to another, color bits truncated to\ntheir bitdepth. In case of single channel (grey or palette), only the r channel is used. Slow\nfunction, do not use to process all pixels of an image. Alpha channel not supported on purpose:\nthis is for bKGD, supporting alpha may prevent it from finding a color in the palette, from the\nspecification it looks like bKGD should ignore the alpha values of the palette since it can use\nany palette index but doesn't have an alpha channel. Idem with ignoring color key. */\nunsigned lodepng_convert_rgb(\n\tunsigned* r_out, unsigned* g_out, unsigned* b_out,\n\tunsigned r_in, unsigned g_in, unsigned b_in,\n\tconst LodePNGColorMode* mode_out, const LodePNGColorMode* mode_in)\n{\n\tunsigned r = 0, g = 0, b = 0;\n\tunsigned mul = 65535 / ((1 << mode_in->bitdepth) - 1); /*65535, 21845, 4369, 257, 1*/\n\tunsigned shift = 16 - mode_out->bitdepth;\n\n\tif (mode_in->colortype == LCT_GREY || mode_in->colortype == LCT_GREY_ALPHA)\n\t{\n\t\tr = g = b = r_in * mul;\n\t}\n\telse if (mode_in->colortype == LCT_RGB || mode_in->colortype == LCT_RGBA)\n\t{\n\t\tr = r_in * mul;\n\t\tg = g_in * mul;\n\t\tb = b_in * mul;\n\t}\n\telse if (mode_in->colortype == LCT_PALETTE)\n\t{\n\t\tif (r_in >= mode_in->palettesize) return 82;\n\t\tr = mode_in->palette[r_in * 4 + 0] * 257;\n\t\tg = mode_in->palette[r_in * 4 + 1] * 257;\n\t\tb = mode_in->palette[r_in * 4 + 2] * 257;\n\t}\n\telse\n\t{\n\t\treturn 31;\n\t}\n\n\t/* now convert to output format */\n\tif (mode_out->colortype == LCT_GREY || mode_out->colortype == LCT_GREY_ALPHA)\n\t{\n\t\t*r_out = r >> shift;\n\t}\n\telse if (mode_out->colortype == LCT_RGB || mode_out->colortype == LCT_RGBA)\n\t{\n\t\t*r_out = r >> shift;\n\t\t*g_out = g >> shift;\n\t\t*b_out = b >> shift;\n\t}\n\telse if (mode_out->colortype == 
LCT_PALETTE)\n\t{\n\t\tunsigned i;\n\t\t/* a 16-bit color cannot be in the palette */\n\t\tif ((r >> 8) != (r & 255) || (g >> 8) != (g & 255) || (b >> 8) != (b & 255)) return 82;\n\t\tfor (i = 0; i < mode_out->palettesize; i++) {\n\t\t\tunsigned j = i * 4;\n\t\t\tif ((r >> 8) == mode_out->palette[j + 0] && (g >> 8) == mode_out->palette[j + 1] &&\n\t\t\t\t(b >> 8) == mode_out->palette[j + 2])\n\t\t\t{\n\t\t\t\t*r_out = i;\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t\treturn 82;\n\t}\n\telse\n\t{\n\t\treturn 31;\n\t}\n\n\treturn 0;\n}\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\nvoid lodepng_color_profile_init(LodePNGColorProfile* profile)\n{\n\tprofile->colored = 0;\n\tprofile->key = 0;\n\tprofile->key_r = profile->key_g = profile->key_b = 0;\n\tprofile->alpha = 0;\n\tprofile->numcolors = 0;\n\tprofile->bits = 1;\n\tprofile->numpixels = 0;\n}\n\n/*function used for debug purposes with C++*/\n/*void printColorProfile(LodePNGColorProfile* p)\n{\nstd::cout << \"colored: \" << (int)p->colored << \", \";\nstd::cout << \"key: \" << (int)p->key << \", \";\nstd::cout << \"key_r: \" << (int)p->key_r << \", \";\nstd::cout << \"key_g: \" << (int)p->key_g << \", \";\nstd::cout << \"key_b: \" << (int)p->key_b << \", \";\nstd::cout << \"alpha: \" << (int)p->alpha << \", \";\nstd::cout << \"numcolors: \" << (int)p->numcolors << \", \";\nstd::cout << \"bits: \" << (int)p->bits << std::endl;\n}*/\n\n/*Returns how many bits needed to represent given value (max 8 bit)*/\nstatic unsigned getValueRequiredBits(unsigned char value)\n{\n\tif (value == 0 || value == 255) return 1;\n\t/*The scaling of 2-bit and 4-bit values uses multiples of 85 and 17*/\n\tif (value % 17 == 0) return value % 85 == 0 ? 
2 : 4;\n\treturn 8;\n}\n\n/*profile must already have been inited.\nIt's ok to set some parameters of profile to done already.*/\nunsigned lodepng_get_color_profile(LodePNGColorProfile* profile,\n\tconst unsigned char* in, unsigned w, unsigned h,\n\tconst LodePNGColorMode* mode_in)\n{\n\tunsigned error = 0;\n\tsize_t i;\n\tColorTree tree;\n\tsize_t numpixels = (size_t)w * (size_t)h;\n\n\t/* mark things as done already if it would be impossible to have a more expensive case */\n\tunsigned colored_done = lodepng_is_greyscale_type(mode_in) ? 1 : 0;\n\tunsigned alpha_done = lodepng_can_have_alpha(mode_in) ? 0 : 1;\n\tunsigned numcolors_done = 0;\n\tunsigned bpp = lodepng_get_bpp(mode_in);\n\tunsigned bits_done = (profile->bits == 1 && bpp == 1) ? 1 : 0;\n\tunsigned sixteen = 0; /* whether the input image is 16 bit */\n\tunsigned maxnumcolors = 257;\n\tif (bpp <= 8) maxnumcolors = LODEPNG_MIN(257, profile->numcolors + (1 << bpp));\n\n\tprofile->numpixels += numpixels;\n\n\tcolor_tree_init(&tree);\n\n\t/*If the profile was already filled in from previous data, fill its palette in tree\n\tand mark things as done already if we know they are the most expensive case already*/\n\tif (profile->alpha) alpha_done = 1;\n\tif (profile->colored) colored_done = 1;\n\tif (profile->bits == 16) numcolors_done = 1;\n\tif (profile->bits >= bpp) bits_done = 1;\n\tif (profile->numcolors >= maxnumcolors) numcolors_done = 1;\n\n\tif (!numcolors_done)\n\t{\n\t\tfor (i = 0; i < profile->numcolors; i++)\n\t\t{\n\t\t\tconst unsigned char* color = &profile->palette[i * 4];\n\t\t\tcolor_tree_add(&tree, color[0], color[1], color[2], color[3], i);\n\t\t}\n\t}\n\n\t/*Check if the 16-bit input is truly 16-bit*/\n\tif (mode_in->bitdepth == 16 && !sixteen)\n\t{\n\t\tunsigned short r, g, b, a;\n\t\tfor (i = 0; i != numpixels; ++i)\n\t\t{\n\t\t\tgetPixelColorRGBA16(&r, &g, &b, &a, in, i, mode_in);\n\t\t\tif ((r & 255) != ((r >> 8) & 255) || (g & 255) != ((g >> 8) & 255) ||\n\t\t\t\t(b & 255) != ((b >> 8) 
& 255) || (a & 255) != ((a >> 8) & 255)) /*first and second byte differ*/\n\t\t\t{\n\t\t\t\tprofile->bits = 16;\n\t\t\t\tsixteen = 1;\n\t\t\t\tbits_done = 1;\n\t\t\t\tnumcolors_done = 1; /*counting colors no longer useful, palette doesn't support 16-bit*/\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (sixteen)\n\t{\n\t\tunsigned short r = 0, g = 0, b = 0, a = 0;\n\n\t\tfor (i = 0; i != numpixels; ++i)\n\t\t{\n\t\t\tgetPixelColorRGBA16(&r, &g, &b, &a, in, i, mode_in);\n\n\t\t\tif (!colored_done && (r != g || r != b))\n\t\t\t{\n\t\t\t\tprofile->colored = 1;\n\t\t\t\tcolored_done = 1;\n\t\t\t}\n\n\t\t\tif (!alpha_done)\n\t\t\t{\n\t\t\t\tunsigned matchkey = (r == profile->key_r && g == profile->key_g && b == profile->key_b);\n\t\t\t\tif (a != 65535 && (a != 0 || (profile->key && !matchkey)))\n\t\t\t\t{\n\t\t\t\t\tprofile->alpha = 1;\n\t\t\t\t\tprofile->key = 0;\n\t\t\t\t\talpha_done = 1;\n\t\t\t\t}\n\t\t\t\telse if (a == 0 && !profile->alpha && !profile->key)\n\t\t\t\t{\n\t\t\t\t\tprofile->key = 1;\n\t\t\t\t\tprofile->key_r = r;\n\t\t\t\t\tprofile->key_g = g;\n\t\t\t\t\tprofile->key_b = b;\n\t\t\t\t}\n\t\t\t\telse if (a == 65535 && profile->key && matchkey)\n\t\t\t\t{\n\t\t\t\t\t/* Color key cannot be used if an opaque pixel also has that RGB color. */\n\t\t\t\t\tprofile->alpha = 1;\n\t\t\t\t\tprofile->key = 0;\n\t\t\t\t\talpha_done = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (alpha_done && numcolors_done && colored_done && bits_done) break;\n\t\t}\n\n\t\tif (profile->key && !profile->alpha)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i)\n\t\t\t{\n\t\t\t\tgetPixelColorRGBA16(&r, &g, &b, &a, in, i, mode_in);\n\t\t\t\tif (a != 0 && r == profile->key_r && g == profile->key_g && b == profile->key_b)\n\t\t\t\t{\n\t\t\t\t\t/* Color key cannot be used if an opaque pixel also has that RGB color. 
*/\n\t\t\t\t\tprofile->alpha = 1;\n\t\t\t\t\tprofile->key = 0;\n\t\t\t\t\talpha_done = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\telse /* < 16-bit */\n\t{\n\t\tunsigned char r = 0, g = 0, b = 0, a = 0;\n\t\tfor (i = 0; i != numpixels; ++i)\n\t\t{\n\t\t\tgetPixelColorRGBA8(&r, &g, &b, &a, in, i, mode_in);\n\n\t\t\tif (!bits_done && profile->bits < 8)\n\t\t\t{\n\t\t\t\t/*only r is checked, < 8 bits is only relevant for greyscale*/\n\t\t\t\tunsigned bits = getValueRequiredBits(r);\n\t\t\t\tif (bits > profile->bits) profile->bits = bits;\n\t\t\t}\n\t\t\tbits_done = (profile->bits >= bpp);\n\n\t\t\tif (!colored_done && (r != g || r != b))\n\t\t\t{\n\t\t\t\tprofile->colored = 1;\n\t\t\t\tcolored_done = 1;\n\t\t\t\tif (profile->bits < 8) profile->bits = 8; /*PNG has no colored modes with less than 8-bit per channel*/\n\t\t\t}\n\n\t\t\tif (!alpha_done)\n\t\t\t{\n\t\t\t\tunsigned matchkey = (r == profile->key_r && g == profile->key_g && b == profile->key_b);\n\t\t\t\tif (a != 255 && (a != 0 || (profile->key && !matchkey)))\n\t\t\t\t{\n\t\t\t\t\tprofile->alpha = 1;\n\t\t\t\t\tprofile->key = 0;\n\t\t\t\t\talpha_done = 1;\n\t\t\t\t\tif (profile->bits < 8) profile->bits = 8; /*PNG has no alphachannel modes with less than 8-bit per channel*/\n\t\t\t\t}\n\t\t\t\telse if (a == 0 && !profile->alpha && !profile->key)\n\t\t\t\t{\n\t\t\t\t\tprofile->key = 1;\n\t\t\t\t\tprofile->key_r = r;\n\t\t\t\t\tprofile->key_g = g;\n\t\t\t\t\tprofile->key_b = b;\n\t\t\t\t}\n\t\t\t\telse if (a == 255 && profile->key && matchkey)\n\t\t\t\t{\n\t\t\t\t\t/* Color key cannot be used if an opaque pixel also has that RGB color. 
*/\n\t\t\t\t\tprofile->alpha = 1;\n\t\t\t\t\tprofile->key = 0;\n\t\t\t\t\talpha_done = 1;\n\t\t\t\t\tif (profile->bits < 8) profile->bits = 8; /*PNG has no alphachannel modes with less than 8-bit per channel*/\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (!numcolors_done)\n\t\t\t{\n\t\t\t\tif (!color_tree_has(&tree, r, g, b, a))\n\t\t\t\t{\n\t\t\t\t\tcolor_tree_add(&tree, r, g, b, a, profile->numcolors);\n\t\t\t\t\tif (profile->numcolors < 256)\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned char* p = profile->palette;\n\t\t\t\t\t\tunsigned n = profile->numcolors;\n\t\t\t\t\t\tp[n * 4 + 0] = r;\n\t\t\t\t\t\tp[n * 4 + 1] = g;\n\t\t\t\t\t\tp[n * 4 + 2] = b;\n\t\t\t\t\t\tp[n * 4 + 3] = a;\n\t\t\t\t\t}\n\t\t\t\t\t++profile->numcolors;\n\t\t\t\t\tnumcolors_done = profile->numcolors >= maxnumcolors;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (alpha_done && numcolors_done && colored_done && bits_done) break;\n\t\t}\n\n\t\tif (profile->key && !profile->alpha)\n\t\t{\n\t\t\tfor (i = 0; i != numpixels; ++i)\n\t\t\t{\n\t\t\t\tgetPixelColorRGBA8(&r, &g, &b, &a, in, i, mode_in);\n\t\t\t\tif (a != 0 && r == profile->key_r && g == profile->key_g && b == profile->key_b)\n\t\t\t\t{\n\t\t\t\t\t/* Color key cannot be used if an opaque pixel also has that RGB color. */\n\t\t\t\t\tprofile->alpha = 1;\n\t\t\t\t\tprofile->key = 0;\n\t\t\t\t\talpha_done = 1;\n\t\t\t\t\tif (profile->bits < 8) profile->bits = 8; /*PNG has no alphachannel modes with less than 8-bit per channel*/\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/*make the profile's key always 16-bit for consistency - repeat each byte twice*/\n\t\tprofile->key_r += (profile->key_r << 8);\n\t\tprofile->key_g += (profile->key_g << 8);\n\t\tprofile->key_b += (profile->key_b << 8);\n\t}\n\n\tcolor_tree_cleanup(&tree);\n\treturn error;\n}\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n/*Adds a single color to the color profile. The profile must already have been inited. The color must be given as 16-bit\n(with 2 bytes repeating for 8-bit and 65535 for opaque alpha channel). 
This function is expensive, do not call it for\nall pixels of an image but only for a few additional values. */\nstatic unsigned lodepng_color_profile_add(LodePNGColorProfile* profile,\n\tunsigned r, unsigned g, unsigned b, unsigned a)\n{\n\tunsigned error = 0;\n\tunsigned char image[8];\n\tLodePNGColorMode mode;\n\tlodepng_color_mode_init(&mode);\n\timage[0] = r >> 8; image[1] = r; image[2] = g >> 8; image[3] = g;\n\timage[4] = b >> 8; image[5] = b; image[6] = a >> 8; image[7] = a;\n\tmode.bitdepth = 16;\n\tmode.colortype = LCT_RGBA;\n\terror = lodepng_get_color_profile(profile, image, 1, 1, &mode);\n\tlodepng_color_mode_cleanup(&mode);\n\treturn error;\n}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\n/*Autochoose color model given the computed profile. mode_in is to copy palette order from\nwhen relevant.*/\nstatic unsigned auto_choose_color_from_profile(LodePNGColorMode* mode_out,\n\tconst LodePNGColorMode* mode_in,\n\tconst LodePNGColorProfile* prof)\n{\n\tunsigned error = 0;\n\tunsigned palettebits, palette_ok;\n\tsize_t i, n;\n\tsize_t numpixels = prof->numpixels;\n\n\tunsigned alpha = prof->alpha;\n\tunsigned key = prof->key;\n\tunsigned bits = prof->bits;\n\n\tmode_out->key_defined = 0;\n\n\tif (key && numpixels <= 16)\n\t{\n\t\talpha = 1; /*too few pixels to justify tRNS chunk overhead*/\n\t\tkey = 0;\n\t\tif (bits < 8) bits = 8; /*PNG has no alphachannel modes with less than 8-bit per channel*/\n\t}\n\tn = prof->numcolors;\n\tpalettebits = n <= 2 ? 1 : (n <= 4 ? 2 : (n <= 16 ? 
4 : 8));\n\tpalette_ok = n <= 256 && bits <= 8;\n\tif (numpixels < n * 2) palette_ok = 0; /*don't add palette overhead if image has only a few pixels*/\n\tif (!prof->colored && bits <= palettebits) palette_ok = 0; /*grey is less overhead*/\n\n\tif (palette_ok)\n\t{\n\t\tconst unsigned char* p = prof->palette;\n\t\tlodepng_palette_clear(mode_out); /*remove potential earlier palette*/\n\t\tfor (i = 0; i != prof->numcolors; ++i)\n\t\t{\n\t\t\terror = lodepng_palette_add(mode_out, p[i * 4 + 0], p[i * 4 + 1], p[i * 4 + 2], p[i * 4 + 3]);\n\t\t\tif (error) break;\n\t\t}\n\n\t\tmode_out->colortype = LCT_PALETTE;\n\t\tmode_out->bitdepth = palettebits;\n\n\t\tif (mode_in->colortype == LCT_PALETTE && mode_in->palettesize >= mode_out->palettesize\n\t\t\t&& mode_in->bitdepth == mode_out->bitdepth)\n\t\t{\n\t\t\t/*If input should have same palette colors, keep original to preserve its order and prevent conversion*/\n\t\t\tlodepng_color_mode_cleanup(mode_out);\n\t\t\tlodepng_color_mode_copy(mode_out, mode_in);\n\t\t}\n\t}\n\telse /*8-bit or 16-bit per channel*/\n\t{\n\t\tmode_out->bitdepth = bits;\n\t\tmode_out->colortype = alpha ? (prof->colored ? LCT_RGBA : LCT_GREY_ALPHA)\n\t\t\t: (prof->colored ? LCT_RGB : LCT_GREY);\n\n\t\tif (key)\n\t\t{\n\t\t\tunsigned mask = (1u << mode_out->bitdepth) - 1u; /*profile always uses 16-bit, mask converts it*/\n\t\t\tmode_out->key_r = prof->key_r & mask;\n\t\t\tmode_out->key_g = prof->key_g & mask;\n\t\t\tmode_out->key_b = prof->key_b & mask;\n\t\t\tmode_out->key_defined = 1;\n\t\t}\n\t}\n\n\treturn error;\n}\n\n/*Automatically chooses color type that gives smallest amount of bits in the\noutput image, e.g. grey if there are only greyscale pixels, palette if there\nare less than 256 colors, color key if only single transparent color, ...\nUpdates values of mode with a potentially smaller color model. 
mode_out should\ncontain the user chosen color model, but will be overwritten with the new chosen one.*/\nunsigned lodepng_auto_choose_color(LodePNGColorMode* mode_out,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tconst LodePNGColorMode* mode_in)\n{\n\tunsigned error = 0;\n\tLodePNGColorProfile prof;\n\tlodepng_color_profile_init(&prof);\n\terror = lodepng_get_color_profile(&prof, image, w, h, mode_in);\n\tif (error) return error;\n\treturn auto_choose_color_from_profile(mode_out, mode_in, &prof);\n}\n\n#endif /* #ifdef LODEPNG_COMPILE_ENCODER */\n\n/*\nPaeth predictor, used by PNG filter type 4\nThe parameters are of type short, but should come from unsigned chars, the shorts\nare only needed to make the Paeth calculation correct.\n*/\nstatic unsigned char paethPredictor(short a, short b, short c)\n{\n\tshort pa = abs(b - c);\n\tshort pb = abs(a - c);\n\tshort pc = abs(a + b - c - c);\n\n\tif (pc < pa && pc < pb) return (unsigned char)c;\n\telse if (pb < pa) return (unsigned char)b;\n\telse return (unsigned char)a;\n}\n\n/*shared values used by multiple Adam7 related functions*/\n\nstatic const unsigned ADAM7_IX[7] = { 0, 4, 0, 2, 0, 1, 0 }; /*x start values*/\nstatic const unsigned ADAM7_IY[7] = { 0, 0, 4, 0, 2, 0, 1 }; /*y start values*/\nstatic const unsigned ADAM7_DX[7] = { 8, 8, 4, 4, 2, 2, 1 }; /*x delta values*/\nstatic const unsigned ADAM7_DY[7] = { 8, 8, 8, 4, 4, 2, 2 }; /*y delta values*/\n\n/*\nOutputs various dimensions and positions in the image related to the Adam7 reduced images.\npassw: output containing the width of the 7 passes\npassh: output containing the height of the 7 passes\nfilter_passstart: output containing the index of the start and end of each\nreduced image with filter bytes\npadded_passstart: output containing the index of the start and end of each\nreduced image without filter bytes but with padded scanlines\npassstart: output containing the index of the start and end of each reduced\nimage without padding between scanlines, but still padding between the images\nw, h: width and height of non-interlaced image\nbpp: bits per pixel\n"padded" is only relevant if bpp is less than 8 and a scanline or image does not\nend at a full byte\n*/\nstatic void Adam7_getpassvalues(unsigned passw[7], unsigned passh[7], size_t filter_passstart[8],\n\tsize_t padded_passstart[8], size_t passstart[8], unsigned w, unsigned h, unsigned bpp)\n{\n\t/*the passstart values have 8 values: the 8th one indicates the byte after the end of the 7th (= last) pass*/\n\tunsigned i;\n\n\t/*calculate width and height in pixels of each pass*/\n\tfor (i = 0; i != 7; ++i)\n\t{\n\t\tpassw[i] = (w + ADAM7_DX[i] - ADAM7_IX[i] - 1) / ADAM7_DX[i];\n\t\tpassh[i] = (h + ADAM7_DY[i] - ADAM7_IY[i] - 1) / ADAM7_DY[i];\n\t\tif (passw[i] == 0) passh[i] = 0;\n\t\tif (passh[i] == 0) passw[i] = 0;\n\t}\n\n\tfilter_passstart[0] = padded_passstart[0] = passstart[0] = 0;\n\tfor (i = 0; i != 7; ++i)\n\t{\n\t\t/*if passw[i] is 0, it's 0 bytes, not 1 (no filtertype-byte)*/\n\t\tfilter_passstart[i + 1] = filter_passstart[i]\n\t\t\t+ ((passw[i] && passh[i]) ? 
passh[i] * (1 + (passw[i] * bpp + 7) / 8) : 0);\n\t\t/*bits padded if needed to fill full byte at end of each scanline*/\n\t\tpadded_passstart[i + 1] = padded_passstart[i] + passh[i] * ((passw[i] * bpp + 7) / 8);\n\t\t/*only padded at end of reduced image*/\n\t\tpassstart[i + 1] = passstart[i] + (passh[i] * passw[i] * bpp + 7) / 8;\n\t}\n}\n\n#ifdef LODEPNG_COMPILE_DECODER\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / PNG Decoder                                                            / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n/*read the information from the header and store it in the LodePNGInfo. return value is error*/\nunsigned lodepng_inspect(unsigned* w, unsigned* h, LodePNGState* state,\n\tconst unsigned char* in, size_t insize)\n{\n\tLodePNGInfo* info = &state->info_png;\n\tif (insize == 0 || in == 0)\n\t{\n\t\tCERROR_RETURN_ERROR(state->error, 48); /*error: the given data is empty*/\n\t}\n\tif (insize < 33)\n\t{\n\t\tCERROR_RETURN_ERROR(state->error, 27); /*error: the data length is smaller than the length of a PNG header*/\n\t}\n\n\t/*when decoding a new PNG image, make sure all parameters created after previous decoding are reset*/\n\tlodepng_info_cleanup(info);\n\tlodepng_info_init(info);\n\n\tif (in[0] != 137 || in[1] != 80 || in[2] != 78 || in[3] != 71\n\t\t|| in[4] != 13 || in[5] != 10 || in[6] != 26 || in[7] != 10)\n\t{\n\t\tCERROR_RETURN_ERROR(state->error, 28); /*error: the first 8 bytes are not the correct PNG signature*/\n\t}\n\tif (lodepng_chunk_length(in + 8) != 13)\n\t{\n\t\tCERROR_RETURN_ERROR(state->error, 94); /*error: header size must be 13 bytes*/\n\t}\n\tif (!lodepng_chunk_type_equals(in + 8, \"IHDR\"))\n\t{\n\t\tCERROR_RETURN_ERROR(state->error, 29); /*error: it doesn't start with a IHDR chunk!*/\n\t}\n\n\t/*read the values given in the header*/\n\t*w = lodepng_read32bitInt(&in[16]);\n\t*h = 
lodepng_read32bitInt(&in[20]);\n\tinfo->color.bitdepth = in[24];\n\tinfo->color.colortype = (LodePNGColorType)in[25];\n\tinfo->compression_method = in[26];\n\tinfo->filter_method = in[27];\n\tinfo->interlace_method = in[28];\n\n\tif (*w == 0 || *h == 0)\n\t{\n\t\tCERROR_RETURN_ERROR(state->error, 93);\n\t}\n\n\tif (!state->decoder.ignore_crc)\n\t{\n\t\tunsigned CRC = lodepng_read32bitInt(&in[29]);\n\t\tunsigned checksum = lodepng_crc32(&in[12], 17);\n\t\tif (CRC != checksum)\n\t\t{\n\t\t\tCERROR_RETURN_ERROR(state->error, 57); /*invalid CRC*/\n\t\t}\n\t}\n\n\t/*error: only compression method 0 is allowed in the specification*/\n\tif (info->compression_method != 0) CERROR_RETURN_ERROR(state->error, 32);\n\t/*error: only filter method 0 is allowed in the specification*/\n\tif (info->filter_method != 0) CERROR_RETURN_ERROR(state->error, 33);\n\t/*error: only interlace methods 0 and 1 exist in the specification*/\n\tif (info->interlace_method > 1) CERROR_RETURN_ERROR(state->error, 34);\n\n\tstate->error = checkColorValidity(info->color.colortype, info->color.bitdepth);\n\treturn state->error;\n}\n\nstatic unsigned unfilterScanline(unsigned char* recon, const unsigned char* scanline, const unsigned char* precon,\n\tsize_t bytewidth, unsigned char filterType, size_t length)\n{\n\t/*\n\tFor PNG filter method 0\n\tunfilter a PNG image scanline by scanline. when the pixels are smaller than 1 byte,\n\tthe filter works byte per byte (bytewidth = 1)\n\tprecon is the previous unfiltered scanline, recon the result, scanline the current one\n\tthe incoming scanlines do NOT include the filtertype byte, that one is given in the parameter filterType instead\n\trecon and scanline MAY be the same memory address! 
precon must be disjoint.\n\t*/\n\n\tsize_t i;\n\tswitch (filterType)\n\t{\n\tcase 0:\n\t\tfor (i = 0; i != length; ++i) recon[i] = scanline[i];\n\t\tbreak;\n\tcase 1:\n\t\tfor (i = 0; i != bytewidth; ++i) recon[i] = scanline[i];\n\t\tfor (i = bytewidth; i < length; ++i) recon[i] = scanline[i] + recon[i - bytewidth];\n\t\tbreak;\n\tcase 2:\n\t\tif (precon)\n\t\t{\n\t\t\tfor (i = 0; i != length; ++i) recon[i] = scanline[i] + precon[i];\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != length; ++i) recon[i] = scanline[i];\n\t\t}\n\t\tbreak;\n\tcase 3:\n\t\tif (precon)\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i) recon[i] = scanline[i] + (precon[i] >> 1);\n\t\t\tfor (i = bytewidth; i < length; ++i) recon[i] = scanline[i] + ((recon[i - bytewidth] + precon[i]) >> 1);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i) recon[i] = scanline[i];\n\t\t\tfor (i = bytewidth; i < length; ++i) recon[i] = scanline[i] + (recon[i - bytewidth] >> 1);\n\t\t}\n\t\tbreak;\n\tcase 4:\n\t\tif (precon)\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i)\n\t\t\t{\n\t\t\t\trecon[i] = (scanline[i] + precon[i]); /*paethPredictor(0, precon[i], 0) is always precon[i]*/\n\t\t\t}\n\t\t\tfor (i = bytewidth; i < length; ++i)\n\t\t\t{\n\t\t\t\trecon[i] = (scanline[i] + paethPredictor(recon[i - bytewidth], precon[i], precon[i - bytewidth]));\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i)\n\t\t\t{\n\t\t\t\trecon[i] = scanline[i];\n\t\t\t}\n\t\t\tfor (i = bytewidth; i < length; ++i)\n\t\t\t{\n\t\t\t\t/*paethPredictor(recon[i - bytewidth], 0, 0) is always recon[i - bytewidth]*/\n\t\t\t\trecon[i] = (scanline[i] + recon[i - bytewidth]);\n\t\t\t}\n\t\t}\n\t\tbreak;\n\tdefault: return 36; /*error: nonexistent filter type given*/\n\t}\n\treturn 0;\n}\n\nstatic unsigned unfilter(unsigned char* out, const unsigned char* in, unsigned w, unsigned h, unsigned bpp)\n{\n\t/*\n\tFor PNG filter method 0\n\tthis function unfilters a single image (e.g. 
without interlacing this is called once, with Adam7 seven times)\n\tout must have enough bytes allocated already, in must have the scanlines + 1 filtertype byte per scanline\n\tw and h are image dimensions or dimensions of reduced image, bpp is bits per pixel\n\tin and out are allowed to be the same memory address (but aren't the same size since in has the extra filter bytes)\n\t*/\n\n\tunsigned y;\n\tunsigned char* prevline = 0;\n\n\t/*bytewidth is used for filtering, is 1 when bpp < 8, number of bytes per pixel otherwise*/\n\tsize_t bytewidth = (bpp + 7) / 8;\n\tsize_t linebytes = (w * bpp + 7) / 8;\n\n\tfor (y = 0; y < h; ++y)\n\t{\n\t\tsize_t outindex = linebytes * y;\n\t\tsize_t inindex = (1 + linebytes) * y; /*the extra filterbyte added to each row*/\n\t\tunsigned char filterType = in[inindex];\n\n\t\tCERROR_TRY_RETURN(unfilterScanline(&out[outindex], &in[inindex + 1], prevline, bytewidth, filterType, linebytes));\n\n\t\tprevline = &out[outindex];\n\t}\n\n\treturn 0;\n}\n\n/*\nin: Adam7 interlaced image, with no padding bits between scanlines, but between\nreduced images so that each reduced image starts at a byte.\nout: the same pixels, but re-ordered so that they're now a non-interlaced image with size w*h\nbpp: bits per pixel\nout has the following size in bits: w * h * bpp.\nin is possibly bigger due to padding bits between reduced images.\nout must be big enough AND must be 0 everywhere if bpp < 8 in the current implementation\n(because that's likely a little bit faster)\nNOTE: comments about padding bits are only relevant if bpp < 8\n*/\nstatic void Adam7_deinterlace(unsigned char* out, const unsigned char* in, unsigned w, unsigned h, unsigned bpp)\n{\n\tunsigned passw[7], passh[7];\n\tsize_t filter_passstart[8], padded_passstart[8], passstart[8];\n\tunsigned i;\n\n\tAdam7_getpassvalues(passw, passh, filter_passstart, padded_passstart, passstart, w, h, bpp);\n\n\tif (bpp >= 8)\n\t{\n\t\tfor (i = 0; i != 7; ++i)\n\t\t{\n\t\t\tunsigned x, y, 
b;\n\t\t\tsize_t bytewidth = bpp / 8;\n\t\t\tfor (y = 0; y < passh[i]; ++y)\n\t\t\t\tfor (x = 0; x < passw[i]; ++x)\n\t\t\t\t{\n\t\t\t\t\tsize_t pixelinstart = passstart[i] + (y * passw[i] + x) * bytewidth;\n\t\t\t\t\tsize_t pixeloutstart = ((ADAM7_IY[i] + y * ADAM7_DY[i]) * w + ADAM7_IX[i] + x * ADAM7_DX[i]) * bytewidth;\n\t\t\t\t\tfor (b = 0; b < bytewidth; ++b)\n\t\t\t\t\t{\n\t\t\t\t\t\tout[pixeloutstart + b] = in[pixelinstart + b];\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t}\n\t}\n\telse /*bpp < 8: Adam7 with pixels < 8 bit is a bit trickier: with bit pointers*/\n\t{\n\t\tfor (i = 0; i != 7; ++i)\n\t\t{\n\t\t\tunsigned x, y, b;\n\t\t\tunsigned ilinebits = bpp * passw[i];\n\t\t\tunsigned olinebits = bpp * w;\n\t\t\tsize_t obp, ibp; /*bit pointers (for out and in buffer)*/\n\t\t\tfor (y = 0; y < passh[i]; ++y)\n\t\t\t\tfor (x = 0; x < passw[i]; ++x)\n\t\t\t\t{\n\t\t\t\t\tibp = (8 * passstart[i]) + (y * ilinebits + x * bpp);\n\t\t\t\t\tobp = (ADAM7_IY[i] + y * ADAM7_DY[i]) * olinebits + (ADAM7_IX[i] + x * ADAM7_DX[i]) * bpp;\n\t\t\t\t\tfor (b = 0; b < bpp; ++b)\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned char bit = readBitFromReversedStream(&ibp, in);\n\t\t\t\t\t\t/*note that this function assumes the out buffer is completely 0, use setBitOfReversedStream otherwise*/\n\t\t\t\t\t\tsetBitOfReversedStream0(&obp, out, bit);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t}\n\t}\n}\n\nstatic void removePaddingBits(unsigned char* out, const unsigned char* in,\n\tsize_t olinebits, size_t ilinebits, unsigned h)\n{\n\t/*\n\tAfter filtering there are still padding bits if scanlines have non multiple of 8 bit amounts. 
They need\nto be removed (except at last scanline of (Adam7-reduced) image) before working with pure image buffers\nfor the Adam7 code, the color convert code and the output to the user.\nin and out are allowed to be the same buffer, in may also be higher but still overlapping; in must\nhave >= ilinebits*h bits, out must have >= olinebits*h bits, olinebits must be <= ilinebits\nalso used to move bits after earlier such operations happened, e.g. in a sequence of reduced images from Adam7\nonly useful if (ilinebits - olinebits) is a value in the range 1..7\n\t*/\n\tunsigned y;\n\tsize_t diff = ilinebits - olinebits;\n\tsize_t ibp = 0, obp = 0; /*input and output bit pointers*/\n\tfor (y = 0; y < h; ++y)\n\t{\n\t\tsize_t x;\n\t\tfor (x = 0; x < olinebits; ++x)\n\t\t{\n\t\t\tunsigned char bit = readBitFromReversedStream(&ibp, in);\n\t\t\tsetBitOfReversedStream(&obp, out, bit);\n\t\t}\n\t\tibp += diff;\n\t}\n}\n\n/*out must be buffer big enough to contain full image, and in must contain the full decompressed data from\nthe IDAT chunks (with filter index bytes and possible padding bits)\nreturn value is error*/\nstatic unsigned postProcessScanlines(unsigned char* out, unsigned char* in,\n\tunsigned w, unsigned h, const LodePNGInfo* info_png)\n{\n\t/*\n\tThis function converts the filtered-padded-interlaced data into pure 2D image buffer with the PNG's colortype.\n\tSteps:\n\t*) if no Adam7: 1) unfilter 2) remove padding bits (= possible extra bits per scanline if bpp < 8)\n\t*) if Adam7: 1) 7x unfilter 2) 7x remove padding bits 3) Adam7_deinterlace\n\tNOTE: the in buffer will be overwritten with intermediate data!\n\t*/\n\tunsigned bpp = lodepng_get_bpp(&info_png->color);\n\tif (bpp == 0) return 31; /*error: invalid colortype*/\n\n\tif (info_png->interlace_method == 0)\n\t{\n\t\tif (bpp < 8 && w * bpp != ((w * bpp + 7) / 8) * 8)\n\t\t{\n\t\t\tCERROR_TRY_RETURN(unfilter(in, in, w, h, bpp));\n\t\t\tremovePaddingBits(out, in, w * bpp, ((w * bpp + 7) / 8) * 8, 
h);\n\t\t}\n\t\t/*we can immediately filter into the out buffer, no other steps needed*/\n\t\telse CERROR_TRY_RETURN(unfilter(out, in, w, h, bpp));\n\t}\n\telse /*interlace_method is 1 (Adam7)*/\n\t{\n\t\tunsigned passw[7], passh[7]; size_t filter_passstart[8], padded_passstart[8], passstart[8];\n\t\tunsigned i;\n\n\t\tAdam7_getpassvalues(passw, passh, filter_passstart, padded_passstart, passstart, w, h, bpp);\n\n\t\tfor (i = 0; i != 7; ++i)\n\t\t{\n\t\t\tCERROR_TRY_RETURN(unfilter(&in[padded_passstart[i]], &in[filter_passstart[i]], passw[i], passh[i], bpp));\n\t\t\t/*TODO: possible efficiency improvement: if in this reduced image the bits fit nicely in 1 scanline,\n\t\t\tmove bytes instead of bits or move not at all*/\n\t\t\tif (bpp < 8)\n\t\t\t{\n\t\t\t\t/*remove padding bits in scanlines; after this there still may be padding\n\t\t\t\tbits between the different reduced images: each reduced image still starts nicely at a byte*/\n\t\t\t\tremovePaddingBits(&in[passstart[i]], &in[padded_passstart[i]], passw[i] * bpp,\n\t\t\t\t\t((passw[i] * bpp + 7) / 8) * 8, passh[i]);\n\t\t\t}\n\t\t}\n\n\t\tAdam7_deinterlace(out, in, w, h, bpp);\n\t}\n\n\treturn 0;\n}\n\nstatic unsigned readChunk_PLTE(LodePNGColorMode* color, const unsigned char* data, size_t chunkLength)\n{\n\tunsigned pos = 0, i;\n\tif (color->palette) lodepng_free(color->palette);\n\tcolor->palettesize = chunkLength / 3;\n\tcolor->palette = (unsigned char*)lodepng_malloc(4 * color->palettesize);\n\tif (!color->palette && color->palettesize)\n\t{\n\t\tcolor->palettesize = 0;\n\t\treturn 83; /*alloc fail*/\n\t}\n\tif (color->palettesize > 256) return 38; /*error: palette too big*/\n\n\tfor (i = 0; i != color->palettesize; ++i)\n\t{\n\t\tcolor->palette[4 * i + 0] = data[pos++]; /*R*/\n\t\tcolor->palette[4 * i + 1] = data[pos++]; /*G*/\n\t\tcolor->palette[4 * i + 2] = data[pos++]; /*B*/\n\t\tcolor->palette[4 * i + 3] = 255; /*alpha*/\n\t}\n\n\treturn 0; /* OK */\n}\n\nstatic unsigned 
readChunk_tRNS(LodePNGColorMode* color, const unsigned char* data, size_t chunkLength)\n{\n\tunsigned i;\n\tif (color->colortype == LCT_PALETTE)\n\t{\n\t\t/*error: more alpha values given than there are palette entries*/\n\t\tif (chunkLength > color->palettesize) return 39;\n\n\t\tfor (i = 0; i != chunkLength; ++i) color->palette[4 * i + 3] = data[i];\n\t}\n\telse if (color->colortype == LCT_GREY)\n\t{\n\t\t/*error: this chunk must be 2 bytes for greyscale image*/\n\t\tif (chunkLength != 2) return 30;\n\n\t\tcolor->key_defined = 1;\n\t\tcolor->key_r = color->key_g = color->key_b = 256u * data[0] + data[1];\n\t}\n\telse if (color->colortype == LCT_RGB)\n\t{\n\t\t/*error: this chunk must be 6 bytes for RGB image*/\n\t\tif (chunkLength != 6) return 41;\n\n\t\tcolor->key_defined = 1;\n\t\tcolor->key_r = 256u * data[0] + data[1];\n\t\tcolor->key_g = 256u * data[2] + data[3];\n\t\tcolor->key_b = 256u * data[4] + data[5];\n\t}\n\telse return 42; /*error: tRNS chunk not allowed for other color models*/\n\n\treturn 0; /* OK */\n}\n\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n/*background color chunk (bKGD)*/\nstatic unsigned readChunk_bKGD(LodePNGInfo* info, const unsigned char* data, size_t chunkLength)\n{\n\tif (info->color.colortype == LCT_PALETTE)\n\t{\n\t\t/*error: this chunk must be 1 byte for indexed color image*/\n\t\tif (chunkLength != 1) return 43;\n\n\t\t/*error: invalid palette index, or maybe this chunk appeared before PLTE*/\n\t\tif (data[0] >= info->color.palettesize) return 103;\n\n\t\tinfo->background_defined = 1;\n\t\tinfo->background_r = info->background_g = info->background_b = data[0];\n\t}\n\telse if (info->color.colortype == LCT_GREY || info->color.colortype == LCT_GREY_ALPHA)\n\t{\n\t\t/*error: this chunk must be 2 bytes for greyscale image*/\n\t\tif (chunkLength != 2) return 44;\n\n\t\t/*the values are truncated to bitdepth in the PNG file*/\n\t\tinfo->background_defined = 1;\n\t\tinfo->background_r = info->background_g = info->background_b = 256u * 
data[0] + data[1];\n\t}\n\telse if (info->color.colortype == LCT_RGB || info->color.colortype == LCT_RGBA)\n\t{\n\t\t/*error: this chunk must be 6 bytes for RGB image*/\n\t\tif (chunkLength != 6) return 45;\n\n\t\t/*the values are truncated to bitdepth in the PNG file*/\n\t\tinfo->background_defined = 1;\n\t\tinfo->background_r = 256u * data[0] + data[1];\n\t\tinfo->background_g = 256u * data[2] + data[3];\n\t\tinfo->background_b = 256u * data[4] + data[5];\n\t}\n\n\treturn 0; /* OK */\n}\n\n/*text chunk (tEXt)*/\nstatic unsigned readChunk_tEXt(LodePNGInfo* info, const unsigned char* data, size_t chunkLength)\n{\n\tunsigned error = 0;\n\tchar *key = 0, *str = 0;\n\tunsigned i;\n\n\twhile (!error) /*not really a while loop, only used to break on error*/\n\t{\n\t\tunsigned length, string2_begin;\n\n\t\tlength = 0;\n\t\twhile (length < chunkLength && data[length] != 0) ++length;\n\t\t/*even though it's not allowed by the standard, no error is thrown if\n\t\tthere's no null termination char, if the text is empty*/\n\t\tif (length < 1 || length > 79) CERROR_BREAK(error, 89); /*keyword too short or long*/\n\n\t\tkey = (char*)lodepng_malloc(length + 1);\n\t\tif (!key) CERROR_BREAK(error, 83); /*alloc fail*/\n\n\t\tkey[length] = 0;\n\t\tfor (i = 0; i != length; ++i) key[i] = (char)data[i];\n\n\t\tstring2_begin = length + 1; /*skip keyword null terminator*/\n\n\t\tlength = (unsigned)(chunkLength < string2_begin ? 
0 : chunkLength - string2_begin);\n\t\tstr = (char*)lodepng_malloc(length + 1);\n\t\tif (!str) CERROR_BREAK(error, 83); /*alloc fail*/\n\n\t\tstr[length] = 0;\n\t\tfor (i = 0; i != length; ++i) str[i] = (char)data[string2_begin + i];\n\n\t\terror = lodepng_add_text(info, key, str);\n\n\t\tbreak;\n\t}\n\n\tlodepng_free(key);\n\tlodepng_free(str);\n\n\treturn error;\n}\n\n/*compressed text chunk (zTXt)*/\nstatic unsigned readChunk_zTXt(LodePNGInfo* info, const LodePNGDecompressSettings* zlibsettings,\n\tconst unsigned char* data, size_t chunkLength)\n{\n\tunsigned error = 0;\n\tunsigned i;\n\n\tunsigned length, string2_begin;\n\tchar *key = 0;\n\tucvector decoded;\n\n\tucvector_init(&decoded);\n\n\twhile (!error) /*not really a while loop, only used to break on error*/\n\t{\n\t\tfor (length = 0; length < chunkLength && data[length] != 0; ++length);\n\t\tif (length + 2 >= chunkLength) CERROR_BREAK(error, 75); /*no null termination, corrupt?*/\n\t\tif (length < 1 || length > 79) CERROR_BREAK(error, 89); /*keyword too short or long*/\n\n\t\tkey = (char*)lodepng_malloc(length + 1);\n\t\tif (!key) CERROR_BREAK(error, 83); /*alloc fail*/\n\n\t\tkey[length] = 0;\n\t\tfor (i = 0; i != length; ++i) key[i] = (char)data[i];\n\n\t\tif (data[length + 1] != 0) CERROR_BREAK(error, 72); /*the 0 byte indicating compression must be 0*/\n\n\t\tstring2_begin = length + 2;\n\t\tif (string2_begin > chunkLength) CERROR_BREAK(error, 75); /*no null termination, corrupt?*/\n\n\t\tlength = (unsigned)chunkLength - string2_begin;\n\t\t/*will fail if zlib error, e.g. 
if length is too small*/\n\t\terror = zlib_decompress(&decoded.data, &decoded.size,\n\t\t\t(unsigned char*)(&data[string2_begin]),\n\t\t\tlength, zlibsettings);\n\t\tif (error) break;\n\t\tucvector_push_back(&decoded, 0);\n\n\t\terror = lodepng_add_text(info, key, (char*)decoded.data);\n\n\t\tbreak;\n\t}\n\n\tlodepng_free(key);\n\tucvector_cleanup(&decoded);\n\n\treturn error;\n}\n\n/*international text chunk (iTXt)*/\nstatic unsigned readChunk_iTXt(LodePNGInfo* info, const LodePNGDecompressSettings* zlibsettings,\n\tconst unsigned char* data, size_t chunkLength)\n{\n\tunsigned error = 0;\n\tunsigned i;\n\n\tunsigned length, begin, compressed;\n\tchar *key = 0, *langtag = 0, *transkey = 0;\n\tucvector decoded;\n\tucvector_init(&decoded); /* TODO: only use in case of compressed text */\n\n\twhile (!error) /*not really a while loop, only used to break on error*/\n\t{\n\t\t/*Quick check if the chunk length isn't too small. Even without check\n\t\tit'd still fail with other error checks below if it's too short. 
This just gives a different error code.*/\n\t\tif (chunkLength < 5) CERROR_BREAK(error, 30); /*iTXt chunk too short*/\n\n\t\t/*read the key*/\n\t\tfor (length = 0; length < chunkLength && data[length] != 0; ++length);\n\t\tif (length + 3 >= chunkLength) CERROR_BREAK(error, 75); /*no null termination char, corrupt?*/\n\t\tif (length < 1 || length > 79) CERROR_BREAK(error, 89); /*keyword too short or long*/\n\n\t\tkey = (char*)lodepng_malloc(length + 1);\n\t\tif (!key) CERROR_BREAK(error, 83); /*alloc fail*/\n\n\t\tkey[length] = 0;\n\t\tfor (i = 0; i != length; ++i) key[i] = (char)data[i];\n\n\t\t/*read the compression method*/\n\t\tcompressed = data[length + 1];\n\t\tif (data[length + 2] != 0) CERROR_BREAK(error, 72); /*the 0 byte indicating compression must be 0*/\n\n\t\t/*even though it's not allowed by the standard, no error is thrown if\n\t\tthere's no null termination char, if the text is empty for the next 3 texts*/\n\n\t\t/*read the langtag*/\n\t\tbegin = length + 3;\n\t\tlength = 0;\n\t\tfor (i = begin; i < chunkLength && data[i] != 0; ++i) ++length;\n\n\t\tlangtag = (char*)lodepng_malloc(length + 1);\n\t\tif (!langtag) CERROR_BREAK(error, 83); /*alloc fail*/\n\n\t\tlangtag[length] = 0;\n\t\tfor (i = 0; i != length; ++i) langtag[i] = (char)data[begin + i];\n\n\t\t/*read the transkey*/\n\t\tbegin += length + 1;\n\t\tlength = 0;\n\t\tfor (i = begin; i < chunkLength && data[i] != 0; ++i) ++length;\n\n\t\ttranskey = (char*)lodepng_malloc(length + 1);\n\t\tif (!transkey) CERROR_BREAK(error, 83); /*alloc fail*/\n\n\t\ttranskey[length] = 0;\n\t\tfor (i = 0; i != length; ++i) transkey[i] = (char)data[begin + i];\n\n\t\t/*read the actual text*/\n\t\tbegin += length + 1;\n\n\t\tlength = (unsigned)chunkLength < begin ? 0 : (unsigned)chunkLength - begin;\n\n\t\tif (compressed)\n\t\t{\n\t\t\t/*will fail if zlib error, e.g. 
if length is too small*/\n\t\t\terror = zlib_decompress(&decoded.data, &decoded.size,\n\t\t\t\t(unsigned char*)(&data[begin]),\n\t\t\t\tlength, zlibsettings);\n\t\t\tif (error) break;\n\t\t\tif (decoded.allocsize < decoded.size) decoded.allocsize = decoded.size;\n\t\t\tucvector_push_back(&decoded, 0);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (!ucvector_resize(&decoded, length + 1)) CERROR_BREAK(error, 83 /*alloc fail*/);\n\n\t\t\tdecoded.data[length] = 0;\n\t\t\tfor (i = 0; i != length; ++i) decoded.data[i] = data[begin + i];\n\t\t}\n\n\t\terror = lodepng_add_itext(info, key, langtag, transkey, (char*)decoded.data);\n\n\t\tbreak;\n\t}\n\n\tlodepng_free(key);\n\tlodepng_free(langtag);\n\tlodepng_free(transkey);\n\tucvector_cleanup(&decoded);\n\n\treturn error;\n}\n\nstatic unsigned readChunk_tIME(LodePNGInfo* info, const unsigned char* data, size_t chunkLength)\n{\n\tif (chunkLength != 7) return 73; /*invalid tIME chunk size*/\n\n\tinfo->time_defined = 1;\n\tinfo->time.year = 256u * data[0] + data[1];\n\tinfo->time.month = data[2];\n\tinfo->time.day = data[3];\n\tinfo->time.hour = data[4];\n\tinfo->time.minute = data[5];\n\tinfo->time.second = data[6];\n\n\treturn 0; /* OK */\n}\n\nstatic unsigned readChunk_pHYs(LodePNGInfo* info, const unsigned char* data, size_t chunkLength)\n{\n\tif (chunkLength != 9) return 74; /*invalid pHYs chunk size*/\n\n\tinfo->phys_defined = 1;\n\tinfo->phys_x = 16777216u * data[0] + 65536u * data[1] + 256u * data[2] + data[3];\n\tinfo->phys_y = 16777216u * data[4] + 65536u * data[5] + 256u * data[6] + data[7];\n\tinfo->phys_unit = data[8];\n\n\treturn 0; /* OK */\n}\n\nstatic unsigned readChunk_gAMA(LodePNGInfo* info, const unsigned char* data, size_t chunkLength)\n{\n\tif (chunkLength != 4) return 96; /*invalid gAMA chunk size*/\n\n\tinfo->gama_defined = 1;\n\tinfo->gama_gamma = 16777216u * data[0] + 65536u * data[1] + 256u * data[2] + data[3];\n\n\treturn 0; /* OK */\n}\n\nstatic unsigned readChunk_cHRM(LodePNGInfo* info, const unsigned char* 
data, size_t chunkLength)\n{\n\tif (chunkLength != 32) return 97; /*invalid cHRM chunk size*/\n\n\tinfo->chrm_defined = 1;\n\tinfo->chrm_white_x = 16777216u * data[0] + 65536u * data[1] + 256u * data[2] + data[3];\n\tinfo->chrm_white_y = 16777216u * data[4] + 65536u * data[5] + 256u * data[6] + data[7];\n\tinfo->chrm_red_x = 16777216u * data[8] + 65536u * data[9] + 256u * data[10] + data[11];\n\tinfo->chrm_red_y = 16777216u * data[12] + 65536u * data[13] + 256u * data[14] + data[15];\n\tinfo->chrm_green_x = 16777216u * data[16] + 65536u * data[17] + 256u * data[18] + data[19];\n\tinfo->chrm_green_y = 16777216u * data[20] + 65536u * data[21] + 256u * data[22] + data[23];\n\tinfo->chrm_blue_x = 16777216u * data[24] + 65536u * data[25] + 256u * data[26] + data[27];\n\tinfo->chrm_blue_y = 16777216u * data[28] + 65536u * data[29] + 256u * data[30] + data[31];\n\n\treturn 0; /* OK */\n}\n\nstatic unsigned readChunk_sRGB(LodePNGInfo* info, const unsigned char* data, size_t chunkLength)\n{\n\tif (chunkLength != 1) return 98; /*invalid sRGB chunk size (this one is never ignored)*/\n\n\tinfo->srgb_defined = 1;\n\tinfo->srgb_intent = data[0];\n\n\treturn 0; /* OK */\n}\n\nstatic unsigned readChunk_iCCP(LodePNGInfo* info, const LodePNGDecompressSettings* zlibsettings,\n\tconst unsigned char* data, size_t chunkLength)\n{\n\tunsigned error = 0;\n\tunsigned i;\n\n\tunsigned length, string2_begin;\n\tucvector decoded;\n\n\tinfo->iccp_defined = 1;\n\tif (info->iccp_name) lodepng_clear_icc(info);\n\n\tfor (length = 0; length < chunkLength && data[length] != 0; ++length);\n\tif (length + 2 >= chunkLength) return 75; /*no null termination, corrupt?*/\n\tif (length < 1 || length > 79) return 89; /*keyword too short or long*/\n\n\tinfo->iccp_name = (char*)lodepng_malloc(length + 1);\n\tif (!info->iccp_name) return 83; /*alloc fail*/\n\n\tinfo->iccp_name[length] = 0;\n\tfor (i = 0; i != length; ++i) info->iccp_name[i] = (char)data[i];\n\n\tif (data[length + 1] != 0) return 72; /*the 0 
byte indicating compression must be 0*/\n\n\tstring2_begin = length + 2;\n\tif (string2_begin > chunkLength) return 75; /*no null termination, corrupt?*/\n\n\tlength = (unsigned)chunkLength - string2_begin;\n\tucvector_init(&decoded);\n\terror = zlib_decompress(&decoded.data, &decoded.size,\n\t\t(unsigned char*)(&data[string2_begin]),\n\t\tlength, zlibsettings);\n\tif (!error) {\n\t\tinfo->iccp_profile_size = decoded.size;\n\t\tinfo->iccp_profile = (unsigned char*)lodepng_malloc(decoded.size);\n\t\tif (info->iccp_profile) {\n\t\t\tmemcpy(info->iccp_profile, decoded.data, decoded.size);\n\t\t}\n\t\telse {\n\t\t\terror = 83; /* alloc fail */\n\t\t}\n\t}\n\tucvector_cleanup(&decoded);\n\treturn error;\n}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\n/*read a PNG, the result will be in the same color type as the PNG (hence \"generic\")*/\nstatic void decodeGeneric(unsigned char** out, unsigned* w, unsigned* h,\n\tLodePNGState* state,\n\tconst unsigned char* in, size_t insize)\n{\n\tunsigned char IEND = 0;\n\tconst unsigned char* chunk;\n\tsize_t i;\n\tucvector idat; /*the data from idat chunks*/\n\tucvector scanlines;\n\tsize_t predict;\n\tsize_t outsize = 0;\n\n\t/*for unknown chunk order*/\n\tunsigned unknown = 0;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tunsigned critical_pos = 1; /*1 = after IHDR, 2 = after PLTE, 3 = after IDAT*/\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\n\t/*provide some proper output values if error will happen*/\n\t*out = 0;\n\n\tstate->error = lodepng_inspect(w, h, state, in, insize); /*reads header and resets other parameters in state->info_png*/\n\tif (state->error) return;\n\n\tif (lodepng_pixel_overflow(*w, *h, &state->info_png.color, &state->info_raw))\n\t{\n\t\tCERROR_RETURN(state->error, 92); /*overflow possible due to amount of pixels*/\n\t}\n\n\tucvector_init(&idat);\n\tchunk = &in[33]; /*first byte of the first chunk after the header*/\n\n\t/*loop through the chunks, ignoring unknown chunks and 
stopping at IEND chunk.\n\tIDAT data is put at the start of the in buffer*/\n\twhile (!IEND && !state->error)\n\t{\n\t\tunsigned chunkLength;\n\t\tconst unsigned char* data; /*the data in the chunk*/\n\n\t\t/*error: size of the in buffer too small to contain next chunk*/\n\t\tif ((size_t)((chunk - in) + 12) > insize || chunk < in)\n\t\t{\n\t\t\tif (state->decoder.ignore_end) break; /*other errors may still happen though*/\n\t\t\tCERROR_BREAK(state->error, 30);\n\t\t}\n\n\t\t/*length of the data of the chunk, excluding the length bytes, chunk type and CRC bytes*/\n\t\tchunkLength = lodepng_chunk_length(chunk);\n\t\t/*error: chunk length larger than the max PNG chunk size*/\n\t\tif (chunkLength > 2147483647)\n\t\t{\n\t\t\tif (state->decoder.ignore_end) break; /*other errors may still happen though*/\n\t\t\tCERROR_BREAK(state->error, 63);\n\t\t}\n\n\t\tif ((size_t)((chunk - in) + chunkLength + 12) > insize || (chunk + chunkLength + 12) < in)\n\t\t{\n\t\t\tCERROR_BREAK(state->error, 64); /*error: size of the in buffer too small to contain next chunk*/\n\t\t}\n\n\t\tdata = lodepng_chunk_data_const(chunk);\n\n\t\tunknown = 0;\n\n\t\t/*IDAT chunk, containing compressed image data*/\n\t\tif (lodepng_chunk_type_equals(chunk, \"IDAT\"))\n\t\t{\n\t\t\tsize_t oldsize = idat.size;\n\t\t\tsize_t newsize;\n\t\t\tif (lodepng_addofl(oldsize, chunkLength, &newsize)) CERROR_BREAK(state->error, 95);\n\t\t\tif (!ucvector_resize(&idat, newsize)) CERROR_BREAK(state->error, 83 /*alloc fail*/);\n\t\t\tfor (i = 0; i != chunkLength; ++i) idat.data[oldsize + i] = data[i];\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t\tcritical_pos = 3;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\t}\n\t\t/*IEND chunk*/\n\t\telse if (lodepng_chunk_type_equals(chunk, \"IEND\"))\n\t\t{\n\t\t\tIEND = 1;\n\t\t}\n\t\t/*palette chunk (PLTE)*/\n\t\telse if (lodepng_chunk_type_equals(chunk, \"PLTE\"))\n\t\t{\n\t\t\tstate->error = readChunk_PLTE(&state->info_png.color, data, 
chunkLength);\n\t\t\tif (state->error) break;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t\tcritical_pos = 2;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\t}\n\t\t/*palette transparency chunk (tRNS). Even though this one is an ancillary chunk, it is still compiled\n\t\tin without 'LODEPNG_COMPILE_ANCILLARY_CHUNKS' because it contains essential color information that\n\t\taffects the alpha channel of pixels. */\n\t\telse if (lodepng_chunk_type_equals(chunk, \"tRNS\"))\n\t\t{\n\t\t\tstate->error = readChunk_tRNS(&state->info_png.color, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t/*background color chunk (bKGD)*/\n\t\telse if (lodepng_chunk_type_equals(chunk, \"bKGD\"))\n\t\t{\n\t\t\tstate->error = readChunk_bKGD(&state->info_png, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n\t\t/*text chunk (tEXt)*/\n\t\telse if (lodepng_chunk_type_equals(chunk, \"tEXt\"))\n\t\t{\n\t\t\tif (state->decoder.read_text_chunks)\n\t\t\t{\n\t\t\t\tstate->error = readChunk_tEXt(&state->info_png, data, chunkLength);\n\t\t\t\tif (state->error) break;\n\t\t\t}\n\t\t}\n\t\t/*compressed text chunk (zTXt)*/\n\t\telse if (lodepng_chunk_type_equals(chunk, \"zTXt\"))\n\t\t{\n\t\t\tif (state->decoder.read_text_chunks)\n\t\t\t{\n\t\t\t\tstate->error = readChunk_zTXt(&state->info_png, &state->decoder.zlibsettings, data, chunkLength);\n\t\t\t\tif (state->error) break;\n\t\t\t}\n\t\t}\n\t\t/*international text chunk (iTXt)*/\n\t\telse if (lodepng_chunk_type_equals(chunk, \"iTXt\"))\n\t\t{\n\t\t\tif (state->decoder.read_text_chunks)\n\t\t\t{\n\t\t\t\tstate->error = readChunk_iTXt(&state->info_png, &state->decoder.zlibsettings, data, chunkLength);\n\t\t\t\tif (state->error) break;\n\t\t\t}\n\t\t}\n\t\telse if (lodepng_chunk_type_equals(chunk, \"tIME\"))\n\t\t{\n\t\t\tstate->error = readChunk_tIME(&state->info_png, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n\t\telse if (lodepng_chunk_type_equals(chunk, 
\"pHYs\"))\n\t\t{\n\t\t\tstate->error = readChunk_pHYs(&state->info_png, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n\t\telse if (lodepng_chunk_type_equals(chunk, \"gAMA\"))\n\t\t{\n\t\t\tstate->error = readChunk_gAMA(&state->info_png, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n\t\telse if (lodepng_chunk_type_equals(chunk, \"cHRM\"))\n\t\t{\n\t\t\tstate->error = readChunk_cHRM(&state->info_png, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n\t\telse if (lodepng_chunk_type_equals(chunk, \"sRGB\"))\n\t\t{\n\t\t\tstate->error = readChunk_sRGB(&state->info_png, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n\t\telse if (lodepng_chunk_type_equals(chunk, \"iCCP\"))\n\t\t{\n\t\t\tstate->error = readChunk_iCCP(&state->info_png, &state->decoder.zlibsettings, data, chunkLength);\n\t\t\tif (state->error) break;\n\t\t}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\telse /*it's not an implemented chunk type, so ignore it: skip over the data*/\n\t\t{\n\t\t\t/*error: unknown critical chunk (5th bit of first byte of chunk type is 0)*/\n\t\t\tif (!state->decoder.ignore_critical && !lodepng_chunk_ancillary(chunk))\n\t\t\t{\n\t\t\t\tCERROR_BREAK(state->error, 69);\n\t\t\t}\n\n\t\t\tunknown = 1;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t\tif (state->decoder.remember_unknown_chunks)\n\t\t\t{\n\t\t\t\tstate->error = lodepng_chunk_append(&state->info_png.unknown_chunks_data[critical_pos - 1],\n\t\t\t\t\t&state->info_png.unknown_chunks_size[critical_pos - 1], chunk);\n\t\t\t\tif (state->error) break;\n\t\t\t}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\t}\n\n\t\tif (!state->decoder.ignore_crc && !unknown) /*check CRC if wanted, only on known chunk types*/\n\t\t{\n\t\t\tif (lodepng_chunk_check_crc(chunk)) CERROR_BREAK(state->error, 57); /*invalid CRC*/\n\t\t}\n\n\t\tif (!IEND) chunk = lodepng_chunk_next_const(chunk);\n\t}\n\n\tucvector_init(&scanlines);\n\t/*predict output size, to allocate exact size for output buffer 
to avoid more dynamic allocation.\n\tIf the decompressed size does not match the prediction, the image must be corrupt.*/\n\tif (state->info_png.interlace_method == 0)\n\t{\n\t\tpredict = lodepng_get_raw_size_idat(*w, *h, &state->info_png.color);\n\t}\n\telse\n\t{\n\t\t/*Adam-7 interlaced: predicted size is the sum of the 7 sub-images sizes*/\n\t\tconst LodePNGColorMode* color = &state->info_png.color;\n\t\tpredict = 0;\n\t\tpredict += lodepng_get_raw_size_idat((*w + 7) >> 3, (*h + 7) >> 3, color);\n\t\tif (*w > 4) predict += lodepng_get_raw_size_idat((*w + 3) >> 3, (*h + 7) >> 3, color);\n\t\tpredict += lodepng_get_raw_size_idat((*w + 3) >> 2, (*h + 3) >> 3, color);\n\t\tif (*w > 2) predict += lodepng_get_raw_size_idat((*w + 1) >> 2, (*h + 3) >> 2, color);\n\t\tpredict += lodepng_get_raw_size_idat((*w + 1) >> 1, (*h + 1) >> 2, color);\n\t\tif (*w > 1) predict += lodepng_get_raw_size_idat((*w + 0) >> 1, (*h + 1) >> 1, color);\n\t\tpredict += lodepng_get_raw_size_idat((*w + 0), (*h + 0) >> 1, color);\n\t}\n\tif (!state->error && !ucvector_reserve(&scanlines, predict)) state->error = 83; /*alloc fail*/\n\tif (!state->error)\n\t{\n\t\tstate->error = zlib_decompress(&scanlines.data, &scanlines.size, idat.data,\n\t\t\tidat.size, &state->decoder.zlibsettings);\n\t\tif (!state->error && scanlines.size != predict) state->error = 91; /*decompressed size doesn't match prediction*/\n\t}\n\tucvector_cleanup(&idat);\n\n\tif (!state->error)\n\t{\n\t\toutsize = lodepng_get_raw_size(*w, *h, &state->info_png.color);\n\t\t*out = (unsigned char*)lodepng_malloc(outsize);\n\t\tif (!*out) state->error = 83; /*alloc fail*/\n\t}\n\tif (!state->error)\n\t{\n\t\tfor (i = 0; i < outsize; i++) (*out)[i] = 0;\n\t\tstate->error = postProcessScanlines(*out, scanlines.data, *w, *h, &state->info_png);\n\t}\n\tucvector_cleanup(&scanlines);\n}\n\nunsigned lodepng_decode(unsigned char** out, unsigned* w, unsigned* h,\n\tLodePNGState* state,\n\tconst unsigned char* in, size_t insize)\n{\n\t*out = 
0;\n\tdecodeGeneric(out, w, h, state, in, insize);\n\tif (state->error) return state->error;\n\tif (!state->decoder.color_convert || lodepng_color_mode_equal(&state->info_raw, &state->info_png.color))\n\t{\n\t\t/*same color type, no copying or converting of data needed*/\n\t\t/*store the info_png color settings on the info_raw so that the info_raw still reflects what colortype\n\t\tthe raw image has to the end user*/\n\t\tif (!state->decoder.color_convert)\n\t\t{\n\t\t\tstate->error = lodepng_color_mode_copy(&state->info_raw, &state->info_png.color);\n\t\t\tif (state->error) return state->error;\n\t\t}\n\t}\n\telse\n\t{\n\t\t/*color conversion needed; sort of copy of the data*/\n\t\tunsigned char* data = *out;\n\t\tsize_t outsize;\n\n\t\t/*TODO: check if this works according to the statement in the documentation: \"The converter can convert\n\t\tfrom greyscale input color type, to 8-bit greyscale or greyscale with alpha\"*/\n\t\tif (!(state->info_raw.colortype == LCT_RGB || state->info_raw.colortype == LCT_RGBA)\n\t\t\t&& !(state->info_raw.bitdepth == 8))\n\t\t{\n\t\t\treturn 56; /*unsupported color mode conversion*/\n\t\t}\n\n\t\toutsize = lodepng_get_raw_size(*w, *h, &state->info_raw);\n\t\t*out = (unsigned char*)lodepng_malloc(outsize);\n\t\tif (!(*out))\n\t\t{\n\t\t\tstate->error = 83; /*alloc fail*/\n\t\t}\n\t\telse state->error = lodepng_convert(*out, data, &state->info_raw,\n\t\t\t&state->info_png.color, *w, *h);\n\t\tlodepng_free(data);\n\t}\n\treturn state->error;\n}\n\nunsigned lodepng_decode_memory(unsigned char** out, unsigned* w, unsigned* h, const unsigned char* in,\n\tsize_t insize, LodePNGColorType colortype, unsigned bitdepth)\n{\n\tunsigned error;\n\tLodePNGState state;\n\tlodepng_state_init(&state);\n\tstate.info_raw.colortype = colortype;\n\tstate.info_raw.bitdepth = bitdepth;\n\terror = lodepng_decode(out, w, h, &state, in, insize);\n\tlodepng_state_cleanup(&state);\n\treturn error;\n}\n\nunsigned lodepng_decode32(unsigned char** out, unsigned* 
w, unsigned* h, const unsigned char* in, size_t insize)\n{\n\treturn lodepng_decode_memory(out, w, h, in, insize, LCT_RGBA, 8);\n}\n\nunsigned lodepng_decode24(unsigned char** out, unsigned* w, unsigned* h, const unsigned char* in, size_t insize)\n{\n\treturn lodepng_decode_memory(out, w, h, in, insize, LCT_RGB, 8);\n}\n\n#ifdef LODEPNG_COMPILE_DISK\nunsigned lodepng_decode_file(unsigned char** out, unsigned* w, unsigned* h, const char* filename,\n\tLodePNGColorType colortype, unsigned bitdepth)\n{\n\tunsigned char* buffer = 0;\n\tsize_t buffersize;\n\tunsigned error;\n\terror = lodepng_load_file(&buffer, &buffersize, filename);\n\tif (!error) error = lodepng_decode_memory(out, w, h, buffer, buffersize, colortype, bitdepth);\n\tlodepng_free(buffer);\n\treturn error;\n}\n\nunsigned lodepng_decode32_file(unsigned char** out, unsigned* w, unsigned* h, const char* filename)\n{\n\treturn lodepng_decode_file(out, w, h, filename, LCT_RGBA, 8);\n}\n\nunsigned lodepng_decode24_file(unsigned char** out, unsigned* w, unsigned* h, const char* filename)\n{\n\treturn lodepng_decode_file(out, w, h, filename, LCT_RGB, 8);\n}\n#endif /*LODEPNG_COMPILE_DISK*/\n\nvoid lodepng_decoder_settings_init(LodePNGDecoderSettings* settings)\n{\n\tsettings->color_convert = 1;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tsettings->read_text_chunks = 1;\n\tsettings->remember_unknown_chunks = 0;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\tsettings->ignore_crc = 0;\n\tsettings->ignore_critical = 0;\n\tsettings->ignore_end = 0;\n\tlodepng_decompress_settings_init(&settings->zlibsettings);\n}\n\n#endif /*LODEPNG_COMPILE_DECODER*/\n\n#if defined(LODEPNG_COMPILE_DECODER) || defined(LODEPNG_COMPILE_ENCODER)\n\nvoid lodepng_state_init(LodePNGState* state)\n{\n#ifdef LODEPNG_COMPILE_DECODER\n\tlodepng_decoder_settings_init(&state->decoder);\n#endif /*LODEPNG_COMPILE_DECODER*/\n#ifdef LODEPNG_COMPILE_ENCODER\n\tlodepng_encoder_settings_init(&state->encoder);\n#endif 
/*LODEPNG_COMPILE_ENCODER*/\n\tlodepng_color_mode_init(&state->info_raw);\n\tlodepng_info_init(&state->info_png);\n\tstate->error = 1;\n}\n\nvoid lodepng_state_cleanup(LodePNGState* state)\n{\n\tlodepng_color_mode_cleanup(&state->info_raw);\n\tlodepng_info_cleanup(&state->info_png);\n}\n\nvoid lodepng_state_copy(LodePNGState* dest, const LodePNGState* source)\n{\n\tlodepng_state_cleanup(dest);\n\t*dest = *source;\n\tlodepng_color_mode_init(&dest->info_raw);\n\tlodepng_info_init(&dest->info_png);\n\tdest->error = lodepng_color_mode_copy(&dest->info_raw, &source->info_raw); if (dest->error) return;\n\tdest->error = lodepng_info_copy(&dest->info_png, &source->info_png); if (dest->error) return;\n}\n\n#endif /* defined(LODEPNG_COMPILE_DECODER) || defined(LODEPNG_COMPILE_ENCODER) */\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* / PNG Encoder                                                            / */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n/*chunkName must be string of 4 characters*/\nstatic unsigned addChunk(ucvector* out, const char* chunkName, const unsigned char* data, size_t length)\n{\n\tCERROR_TRY_RETURN(lodepng_chunk_create(&out->data, &out->size, (unsigned)length, chunkName, data));\n\tout->allocsize = out->size; /*fix the allocsize again*/\n\treturn 0;\n}\n\nstatic void writeSignature(ucvector* out)\n{\n\t/*8 bytes PNG signature, aka the magic bytes*/\n\tucvector_push_back(out, 137);\n\tucvector_push_back(out, 80);\n\tucvector_push_back(out, 78);\n\tucvector_push_back(out, 71);\n\tucvector_push_back(out, 13);\n\tucvector_push_back(out, 10);\n\tucvector_push_back(out, 26);\n\tucvector_push_back(out, 10);\n}\n\nstatic unsigned addChunk_IHDR(ucvector* out, unsigned w, unsigned h,\n\tLodePNGColorType colortype, unsigned bitdepth, unsigned interlace_method)\n{\n\tunsigned error = 0;\n\tucvector 
header;\n\tucvector_init(&header);\n\n\tlodepng_add32bitInt(&header, w); /*width*/\n\tlodepng_add32bitInt(&header, h); /*height*/\n\tucvector_push_back(&header, (unsigned char)bitdepth); /*bit depth*/\n\tucvector_push_back(&header, (unsigned char)colortype); /*color type*/\n\tucvector_push_back(&header, 0); /*compression method*/\n\tucvector_push_back(&header, 0); /*filter method*/\n\tucvector_push_back(&header, interlace_method); /*interlace method*/\n\n\terror = addChunk(out, \"IHDR\", header.data, header.size);\n\tucvector_cleanup(&header);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_PLTE(ucvector* out, const LodePNGColorMode* info)\n{\n\tunsigned error = 0;\n\tsize_t i;\n\tucvector PLTE;\n\tucvector_init(&PLTE);\n\tfor (i = 0; i != info->palettesize * 4; ++i)\n\t{\n\t\t/*add all channels except alpha channel*/\n\t\tif (i % 4 != 3) ucvector_push_back(&PLTE, info->palette[i]);\n\t}\n\terror = addChunk(out, \"PLTE\", PLTE.data, PLTE.size);\n\tucvector_cleanup(&PLTE);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_tRNS(ucvector* out, const LodePNGColorMode* info)\n{\n\tunsigned error = 0;\n\tsize_t i;\n\tucvector tRNS;\n\tucvector_init(&tRNS);\n\tif (info->colortype == LCT_PALETTE)\n\t{\n\t\tsize_t amount = info->palettesize;\n\t\t/*the tail of palette values that all have 255 as alpha, does not have to be encoded*/\n\t\tfor (i = info->palettesize; i != 0; --i)\n\t\t{\n\t\t\tif (info->palette[4 * (i - 1) + 3] == 255) --amount;\n\t\t\telse break;\n\t\t}\n\t\t/*add only alpha channel*/\n\t\tfor (i = 0; i != amount; ++i) ucvector_push_back(&tRNS, info->palette[4 * i + 3]);\n\t}\n\telse if (info->colortype == LCT_GREY)\n\t{\n\t\tif (info->key_defined)\n\t\t{\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_r >> 8));\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_r & 255));\n\t\t}\n\t}\n\telse if (info->colortype == LCT_RGB)\n\t{\n\t\tif (info->key_defined)\n\t\t{\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_r >> 
8));\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_r & 255));\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_g >> 8));\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_g & 255));\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_b >> 8));\n\t\t\tucvector_push_back(&tRNS, (unsigned char)(info->key_b & 255));\n\t\t}\n\t}\n\n\terror = addChunk(out, \"tRNS\", tRNS.data, tRNS.size);\n\tucvector_cleanup(&tRNS);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_IDAT(ucvector* out, const unsigned char* data, size_t datasize,\n\tLodePNGCompressSettings* zlibsettings)\n{\n\tucvector zlibdata;\n\tunsigned error = 0;\n\n\t/*compress with the Zlib compressor*/\n\tucvector_init(&zlibdata);\n\terror = zlib_compress(&zlibdata.data, &zlibdata.size, data, datasize, zlibsettings);\n\tif (!error) error = addChunk(out, \"IDAT\", zlibdata.data, zlibdata.size);\n\tucvector_cleanup(&zlibdata);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_IEND(ucvector* out)\n{\n\tunsigned error = 0;\n\terror = addChunk(out, \"IEND\", 0, 0);\n\treturn error;\n}\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\nstatic unsigned addChunk_tEXt(ucvector* out, const char* keyword, const char* textstring)\n{\n\tunsigned error = 0;\n\tsize_t i;\n\tucvector text;\n\tucvector_init(&text);\n\tfor (i = 0; keyword[i] != 0; ++i) ucvector_push_back(&text, (unsigned char)keyword[i]);\n\tif (i < 1 || i > 79) return 89; /*error: invalid keyword size*/\n\tucvector_push_back(&text, 0); /*0 termination char*/\n\tfor (i = 0; textstring[i] != 0; ++i) ucvector_push_back(&text, (unsigned char)textstring[i]);\n\terror = addChunk(out, \"tEXt\", text.data, text.size);\n\tucvector_cleanup(&text);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_zTXt(ucvector* out, const char* keyword, const char* textstring,\n\tLodePNGCompressSettings* zlibsettings)\n{\n\tunsigned error = 0;\n\tucvector data, compressed;\n\tsize_t i, textsize = 
strlen(textstring);\n\n\tucvector_init(&data);\n\tucvector_init(&compressed);\n\tfor (i = 0; keyword[i] != 0; ++i) ucvector_push_back(&data, (unsigned char)keyword[i]);\n\tif (i < 1 || i > 79) return 89; /*error: invalid keyword size*/\n\tucvector_push_back(&data, 0); /*0 termination char*/\n\tucvector_push_back(&data, 0); /*compression method: 0*/\n\n\terror = zlib_compress(&compressed.data, &compressed.size,\n\t\t(unsigned char*)textstring, textsize, zlibsettings);\n\tif (!error)\n\t{\n\t\tfor (i = 0; i != compressed.size; ++i) ucvector_push_back(&data, compressed.data[i]);\n\t\terror = addChunk(out, \"zTXt\", data.data, data.size);\n\t}\n\n\tucvector_cleanup(&compressed);\n\tucvector_cleanup(&data);\n\treturn error;\n}\n\nstatic unsigned addChunk_iTXt(ucvector* out, unsigned compressed, const char* keyword, const char* langtag,\n\tconst char* transkey, const char* textstring, LodePNGCompressSettings* zlibsettings)\n{\n\tunsigned error = 0;\n\tucvector data;\n\tsize_t i, textsize = strlen(textstring);\n\n\tucvector_init(&data);\n\n\tfor (i = 0; keyword[i] != 0; ++i) ucvector_push_back(&data, (unsigned char)keyword[i]);\n\tif (i < 1 || i > 79) return 89; /*error: invalid keyword size*/\n\tucvector_push_back(&data, 0); /*null termination char*/\n\tucvector_push_back(&data, compressed ? 
1 : 0); /*compression flag*/\n\tucvector_push_back(&data, 0); /*compression method*/\n\tfor (i = 0; langtag[i] != 0; ++i) ucvector_push_back(&data, (unsigned char)langtag[i]);\n\tucvector_push_back(&data, 0); /*null termination char*/\n\tfor (i = 0; transkey[i] != 0; ++i) ucvector_push_back(&data, (unsigned char)transkey[i]);\n\tucvector_push_back(&data, 0); /*null termination char*/\n\n\tif (compressed)\n\t{\n\t\tucvector compressed_data;\n\t\tucvector_init(&compressed_data);\n\t\terror = zlib_compress(&compressed_data.data, &compressed_data.size,\n\t\t\t(unsigned char*)textstring, textsize, zlibsettings);\n\t\tif (!error)\n\t\t{\n\t\t\tfor (i = 0; i != compressed_data.size; ++i) ucvector_push_back(&data, compressed_data.data[i]);\n\t\t}\n\t\tucvector_cleanup(&compressed_data);\n\t}\n\telse /*not compressed*/\n\t{\n\t\tfor (i = 0; textstring[i] != 0; ++i) ucvector_push_back(&data, (unsigned char)textstring[i]);\n\t}\n\n\tif (!error) error = addChunk(out, \"iTXt\", data.data, data.size);\n\tucvector_cleanup(&data);\n\treturn error;\n}\n\nstatic unsigned addChunk_bKGD(ucvector* out, const LodePNGInfo* info)\n{\n\tunsigned error = 0;\n\tucvector bKGD;\n\tucvector_init(&bKGD);\n\tif (info->color.colortype == LCT_GREY || info->color.colortype == LCT_GREY_ALPHA)\n\t{\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_r >> 8));\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_r & 255));\n\t}\n\telse if (info->color.colortype == LCT_RGB || info->color.colortype == LCT_RGBA)\n\t{\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_r >> 8));\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_r & 255));\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_g >> 8));\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_g & 255));\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_b >> 8));\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_b & 255));\n\t}\n\telse if 
(info->color.colortype == LCT_PALETTE)\n\t{\n\t\tucvector_push_back(&bKGD, (unsigned char)(info->background_r & 255)); /*palette index*/\n\t}\n\n\terror = addChunk(out, \"bKGD\", bKGD.data, bKGD.size);\n\tucvector_cleanup(&bKGD);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_tIME(ucvector* out, const LodePNGTime* time)\n{\n\tunsigned error = 0;\n\tunsigned char* data = (unsigned char*)lodepng_malloc(7);\n\tif (!data) return 83; /*alloc fail*/\n\tdata[0] = (unsigned char)(time->year >> 8);\n\tdata[1] = (unsigned char)(time->year & 255);\n\tdata[2] = (unsigned char)time->month;\n\tdata[3] = (unsigned char)time->day;\n\tdata[4] = (unsigned char)time->hour;\n\tdata[5] = (unsigned char)time->minute;\n\tdata[6] = (unsigned char)time->second;\n\terror = addChunk(out, \"tIME\", data, 7);\n\tlodepng_free(data);\n\treturn error;\n}\n\nstatic unsigned addChunk_pHYs(ucvector* out, const LodePNGInfo* info)\n{\n\tunsigned error = 0;\n\tucvector data;\n\tucvector_init(&data);\n\n\tlodepng_add32bitInt(&data, info->phys_x);\n\tlodepng_add32bitInt(&data, info->phys_y);\n\tucvector_push_back(&data, info->phys_unit);\n\n\terror = addChunk(out, \"pHYs\", data.data, data.size);\n\tucvector_cleanup(&data);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_gAMA(ucvector* out, const LodePNGInfo* info)\n{\n\tunsigned error = 0;\n\tucvector data;\n\tucvector_init(&data);\n\n\tlodepng_add32bitInt(&data, info->gama_gamma);\n\n\terror = addChunk(out, \"gAMA\", data.data, data.size);\n\tucvector_cleanup(&data);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_cHRM(ucvector* out, const LodePNGInfo* info)\n{\n\tunsigned error = 0;\n\tucvector data;\n\tucvector_init(&data);\n\n\tlodepng_add32bitInt(&data, info->chrm_white_x);\n\tlodepng_add32bitInt(&data, info->chrm_white_y);\n\tlodepng_add32bitInt(&data, info->chrm_red_x);\n\tlodepng_add32bitInt(&data, info->chrm_red_y);\n\tlodepng_add32bitInt(&data, info->chrm_green_x);\n\tlodepng_add32bitInt(&data, 
info->chrm_green_y);\n\tlodepng_add32bitInt(&data, info->chrm_blue_x);\n\tlodepng_add32bitInt(&data, info->chrm_blue_y);\n\n\terror = addChunk(out, \"cHRM\", data.data, data.size);\n\tucvector_cleanup(&data);\n\n\treturn error;\n}\n\nstatic unsigned addChunk_sRGB(ucvector* out, const LodePNGInfo* info)\n{\n\tunsigned char data = info->srgb_intent;\n\treturn addChunk(out, \"sRGB\", &data, 1);\n}\n\nstatic unsigned addChunk_iCCP(ucvector* out, const LodePNGInfo* info, LodePNGCompressSettings* zlibsettings)\n{\n\tunsigned error = 0;\n\tucvector data, compressed;\n\tsize_t i;\n\n\tucvector_init(&data);\n\tucvector_init(&compressed);\n\tfor (i = 0; info->iccp_name[i] != 0; ++i) ucvector_push_back(&data, (unsigned char)info->iccp_name[i]);\n\tif (i < 1 || i > 79) return 89; /*error: invalid keyword size*/\n\tucvector_push_back(&data, 0); /*0 termination char*/\n\tucvector_push_back(&data, 0); /*compression method: 0*/\n\n\terror = zlib_compress(&compressed.data, &compressed.size,\n\t\tinfo->iccp_profile, info->iccp_profile_size, zlibsettings);\n\tif (!error)\n\t{\n\t\tfor (i = 0; i != compressed.size; ++i) ucvector_push_back(&data, compressed.data[i]);\n\t\terror = addChunk(out, \"iCCP\", data.data, data.size);\n\t}\n\n\tucvector_cleanup(&compressed);\n\tucvector_cleanup(&data);\n\treturn error;\n}\n\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\nstatic void filterScanline(unsigned char* out, const unsigned char* scanline, const unsigned char* prevline,\n\tsize_t length, size_t bytewidth, unsigned char filterType)\n{\n\tsize_t i;\n\tswitch (filterType)\n\t{\n\tcase 0: /*None*/\n\t\tfor (i = 0; i != length; ++i) out[i] = scanline[i];\n\t\tbreak;\n\tcase 1: /*Sub*/\n\t\tfor (i = 0; i != bytewidth; ++i) out[i] = scanline[i];\n\t\tfor (i = bytewidth; i < length; ++i) out[i] = scanline[i] - scanline[i - bytewidth];\n\t\tbreak;\n\tcase 2: /*Up*/\n\t\tif (prevline)\n\t\t{\n\t\t\tfor (i = 0; i != length; ++i) out[i] = scanline[i] - 
prevline[i];\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != length; ++i) out[i] = scanline[i];\n\t\t}\n\t\tbreak;\n\tcase 3: /*Average*/\n\t\tif (prevline)\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i) out[i] = scanline[i] - (prevline[i] >> 1);\n\t\t\tfor (i = bytewidth; i < length; ++i) out[i] = scanline[i] - ((scanline[i - bytewidth] + prevline[i]) >> 1);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i) out[i] = scanline[i];\n\t\t\tfor (i = bytewidth; i < length; ++i) out[i] = scanline[i] - (scanline[i - bytewidth] >> 1);\n\t\t}\n\t\tbreak;\n\tcase 4: /*Paeth*/\n\t\tif (prevline)\n\t\t{\n\t\t\t/*paethPredictor(0, prevline[i], 0) is always prevline[i]*/\n\t\t\tfor (i = 0; i != bytewidth; ++i) out[i] = (scanline[i] - prevline[i]);\n\t\t\tfor (i = bytewidth; i < length; ++i)\n\t\t\t{\n\t\t\t\tout[i] = (scanline[i] - paethPredictor(scanline[i - bytewidth], prevline[i], prevline[i - bytewidth]));\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (i = 0; i != bytewidth; ++i) out[i] = scanline[i];\n\t\t\t/*paethPredictor(scanline[i - bytewidth], 0, 0) is always scanline[i - bytewidth]*/\n\t\t\tfor (i = bytewidth; i < length; ++i) out[i] = (scanline[i] - scanline[i - bytewidth]);\n\t\t}\n\t\tbreak;\n\tdefault: return; /*nonexistent filter type given*/\n\t}\n}\n\n/* log2 approximation. Slightly faster than std::log. 
*/\nstatic float flog2(float f)\n{\n\tfloat result = 0;\n\twhile (f > 32) { result += 4; f /= 16; }\n\twhile (f > 2) { ++result; f /= 2; }\n\treturn result + 1.442695f * (f * f * f / 3 - 3 * f * f / 2 + 3 * f - 1.83333f);\n}\n\nstatic unsigned filter(unsigned char* out, const unsigned char* in, unsigned w, unsigned h,\n\tconst LodePNGColorMode* info, const LodePNGEncoderSettings* settings)\n{\n\t/*\n\tFor PNG filter method 0\n\tout must be a buffer with as size: h + (w * h * bpp + 7) / 8, because there are\n\tthe scanlines with 1 extra byte per scanline\n\t*/\n\n\tunsigned bpp = lodepng_get_bpp(info);\n\t/*the width of a scanline in bytes, not including the filter type*/\n\tsize_t linebytes = (w * bpp + 7) / 8;\n\t/*bytewidth is used for filtering, is 1 when bpp < 8, number of bytes per pixel otherwise*/\n\tsize_t bytewidth = (bpp + 7) / 8;\n\tconst unsigned char* prevline = 0;\n\tunsigned x, y;\n\tunsigned error = 0;\n\tLodePNGFilterStrategy strategy = settings->filter_strategy;\n\n\t/*\n\tThere is a heuristic called the minimum sum of absolute differences heuristic, suggested by the PNG standard:\n\t*  If the image type is Palette, or the bit depth is smaller than 8, then do not filter the image (i.e.\n\tuse fixed filtering, with the filter None).\n\t* (The other case) If the image type is Grayscale or RGB (with or without Alpha), and the bit depth is\n\tnot smaller than 8, then use adaptive filtering heuristic as follows: independently for each row, apply\n\tall five filters and select the filter that produces the smallest sum of absolute values per row.\n\tThis heuristic is used if filter strategy is LFS_MINSUM and filter_palette_zero is true.\n\n\tIf filter_palette_zero is true and filter_strategy is not LFS_MINSUM, the above heuristic is followed,\n\tbut for \"the other case\", whatever strategy filter_strategy is set to instead of the minimum sum\n\theuristic is used.\n\t*/\n\tif (settings->filter_palette_zero &&\n\t\t(info->colortype == LCT_PALETTE || 
info->bitdepth < 8)) strategy = LFS_ZERO;\n\n\tif (bpp == 0) return 31; /*error: invalid color type*/\n\n\tif (strategy == LFS_ZERO)\n\t{\n\t\tfor (y = 0; y != h; ++y)\n\t\t{\n\t\t\tsize_t outindex = (1 + linebytes) * y; /*the extra filterbyte added to each row*/\n\t\t\tsize_t inindex = linebytes * y;\n\t\t\tout[outindex] = 0; /*filter type byte*/\n\t\t\tfilterScanline(&out[outindex + 1], &in[inindex], prevline, linebytes, bytewidth, 0);\n\t\t\tprevline = &in[inindex];\n\t\t}\n\t}\n\telse if (strategy == LFS_MINSUM)\n\t{\n\t\t/*adaptive filtering*/\n\t\tsize_t sum[5];\n\t\tunsigned char* attempt[5]; /*five filtering attempts, one for each filter type*/\n\t\tsize_t smallest = 0;\n\t\tunsigned char type, bestType = 0;\n\n\t\tfor (type = 0; type != 5; ++type)\n\t\t{\n\t\t\tattempt[type] = (unsigned char*)lodepng_malloc(linebytes);\n\t\t\tif (!attempt[type]) return 83; /*alloc fail*/\n\t\t}\n\n\t\tif (!error)\n\t\t{\n\t\t\tfor (y = 0; y != h; ++y)\n\t\t\t{\n\t\t\t\t/*try the 5 filter types*/\n\t\t\t\tfor (type = 0; type != 5; ++type)\n\t\t\t\t{\n\t\t\t\t\tfilterScanline(attempt[type], &in[y * linebytes], prevline, linebytes, bytewidth, type);\n\n\t\t\t\t\t/*calculate the sum of the result*/\n\t\t\t\t\tsum[type] = 0;\n\t\t\t\t\tif (type == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tfor (x = 0; x != linebytes; ++x) sum[type] += (unsigned char)(attempt[type][x]);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tfor (x = 0; x != linebytes; ++x)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t/*For differences, each byte should be treated as signed, values above 127 are negative\n\t\t\t\t\t\t\t(converted to signed char). Filtertype 0 isn't a difference though, so use unsigned there.\n\t\t\t\t\t\t\tThis means filtertype 0 is almost never chosen, but that is justified.*/\n\t\t\t\t\t\t\tunsigned char s = attempt[type][x];\n\t\t\t\t\t\t\tsum[type] += s < 128 ? 
s : (255U - s);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/*check if this is smallest sum (or if type == 0 it's the first case so always store the values)*/\n\t\t\t\t\tif (type == 0 || sum[type] < smallest)\n\t\t\t\t\t{\n\t\t\t\t\t\tbestType = type;\n\t\t\t\t\t\tsmallest = sum[type];\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tprevline = &in[y * linebytes];\n\n\t\t\t\t/*now fill the out values*/\n\t\t\t\tout[y * (linebytes + 1)] = bestType; /*the first byte of a scanline will be the filter type*/\n\t\t\t\tfor (x = 0; x != linebytes; ++x) out[y * (linebytes + 1) + 1 + x] = attempt[bestType][x];\n\t\t\t}\n\t\t}\n\n\t\tfor (type = 0; type != 5; ++type) lodepng_free(attempt[type]);\n\t}\n\telse if (strategy == LFS_ENTROPY)\n\t{\n\t\tfloat sum[5];\n\t\tunsigned char* attempt[5]; /*five filtering attempts, one for each filter type*/\n\t\tfloat smallest = 0;\n\t\tunsigned type, bestType = 0;\n\t\tunsigned count[256];\n\n\t\tfor (type = 0; type != 5; ++type)\n\t\t{\n\t\t\tattempt[type] = (unsigned char*)lodepng_malloc(linebytes);\n\t\t\tif (!attempt[type]) return 83; /*alloc fail*/\n\t\t}\n\n\t\tfor (y = 0; y != h; ++y)\n\t\t{\n\t\t\t/*try the 5 filter types*/\n\t\t\tfor (type = 0; type != 5; ++type)\n\t\t\t{\n\t\t\t\tfilterScanline(attempt[type], &in[y * linebytes], prevline, linebytes, bytewidth, type);\n\t\t\t\tfor (x = 0; x != 256; ++x) count[x] = 0;\n\t\t\t\tfor (x = 0; x != linebytes; ++x) ++count[attempt[type][x]];\n\t\t\t\t++count[type]; /*the filter type itself is part of the scanline*/\n\t\t\t\tsum[type] = 0;\n\t\t\t\tfor (x = 0; x != 256; ++x)\n\t\t\t\t{\n\t\t\t\t\tfloat p = count[x] / (float)(linebytes + 1);\n\t\t\t\t\tsum[type] += count[x] == 0 ? 
0 : flog2(1 / p) * p;\n\t\t\t\t}\n\t\t\t\t/*check if this is smallest sum (or if type == 0 it's the first case so always store the values)*/\n\t\t\t\tif (type == 0 || sum[type] < smallest)\n\t\t\t\t{\n\t\t\t\t\tbestType = type;\n\t\t\t\t\tsmallest = sum[type];\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tprevline = &in[y * linebytes];\n\n\t\t\t/*now fill the out values*/\n\t\t\tout[y * (linebytes + 1)] = bestType; /*the first byte of a scanline will be the filter type*/\n\t\t\tfor (x = 0; x != linebytes; ++x) out[y * (linebytes + 1) + 1 + x] = attempt[bestType][x];\n\t\t}\n\n\t\tfor (type = 0; type != 5; ++type) lodepng_free(attempt[type]);\n\t}\n\telse if (strategy == LFS_PREDEFINED)\n\t{\n\t\tfor (y = 0; y != h; ++y)\n\t\t{\n\t\t\tsize_t outindex = (1 + linebytes) * y; /*the extra filterbyte added to each row*/\n\t\t\tsize_t inindex = linebytes * y;\n\t\t\tunsigned char type = settings->predefined_filters[y];\n\t\t\tout[outindex] = type; /*filter type byte*/\n\t\t\tfilterScanline(&out[outindex + 1], &in[inindex], prevline, linebytes, bytewidth, type);\n\t\t\tprevline = &in[inindex];\n\t\t}\n\t}\n\telse if (strategy == LFS_BRUTE_FORCE)\n\t{\n\t\t/*brute-force filter chooser:\n\t\tdeflate the scanline after every filter attempt to see which one deflates best.\n\t\tThis is very slow and gives only slightly smaller, and sometimes even larger, results*/\n\t\tsize_t size[5];\n\t\tunsigned char* attempt[5]; /*five filtering attempts, one for each filter type*/\n\t\tsize_t smallest = 0;\n\t\tunsigned type = 0, bestType = 0;\n\t\tunsigned char* dummy;\n\t\tLodePNGCompressSettings zlibsettings = settings->zlibsettings;\n\t\t/*use a fixed tree on the attempts so that the tree is not adapted to the filtertype on purpose,\n\t\tto simulate the true case where the tree is the same for the whole image. Sometimes it gives\n\t\tbetter results with a dynamic tree anyway. Using the fixed tree sometimes gives worse, but in rare\n\t\tcases better compression. 
It does make this a bit faster, so it's worth doing.*/\n\t\tzlibsettings.btype = 1;\n\t\t/*a custom encoder likely doesn't read the btype setting and is optimized for complete PNG\n\t\timages only, so disable it*/\n\t\tzlibsettings.custom_zlib = 0;\n\t\tzlibsettings.custom_deflate = 0;\n\t\tfor (type = 0; type != 5; ++type)\n\t\t{\n\t\t\tattempt[type] = (unsigned char*)lodepng_malloc(linebytes);\n\t\t\tif (!attempt[type]) return 83; /*alloc fail*/\n\t\t}\n\t\tfor (y = 0; y != h; ++y) /*try the 5 filter types*/\n\t\t{\n\t\t\tfor (type = 0; type != 5; ++type)\n\t\t\t{\n\t\t\t\tunsigned testsize = (unsigned)linebytes;\n\t\t\t\t/*if(testsize > 8) testsize /= 8;*/ /*it already works well enough by testing a part of the row*/\n\n\t\t\t\tfilterScanline(attempt[type], &in[y * linebytes], prevline, linebytes, bytewidth, type);\n\t\t\t\tsize[type] = 0;\n\t\t\t\tdummy = 0;\n\t\t\t\tzlib_compress(&dummy, &size[type], attempt[type], testsize, &zlibsettings);\n\t\t\t\tlodepng_free(dummy);\n\t\t\t\t/*check if this is smallest size (or if type == 0 it's the first case so always store the values)*/\n\t\t\t\tif (type == 0 || size[type] < smallest)\n\t\t\t\t{\n\t\t\t\t\tbestType = type;\n\t\t\t\t\tsmallest = size[type];\n\t\t\t\t}\n\t\t\t}\n\t\t\tprevline = &in[y * linebytes];\n\t\t\tout[y * (linebytes + 1)] = bestType; /*the first byte of a scanline will be the filter type*/\n\t\t\tfor (x = 0; x != linebytes; ++x) out[y * (linebytes + 1) + 1 + x] = attempt[bestType][x];\n\t\t}\n\t\tfor (type = 0; type != 5; ++type) lodepng_free(attempt[type]);\n\t}\n\telse return 88; /* unknown filter strategy */\n\n\treturn error;\n}\n\nstatic void addPaddingBits(unsigned char* out, const unsigned char* in,\n\tsize_t olinebits, size_t ilinebits, unsigned h)\n{\n\t/*The opposite of the removePaddingBits function;\n\tolinebits must be >= ilinebits*/\n\tunsigned y;\n\tsize_t diff = olinebits - ilinebits;\n\tsize_t obp = 0, ibp = 0; /*bit pointers*/\n\tfor (y = 0; y != h; ++y)\n\t{\n\t\tsize_t x;\n\t\tfor (x = 0; x < ilinebits; ++x)\n\t\t{\n\t\t\tunsigned char bit = readBitFromReversedStream(&ibp, in);\n\t\t\tsetBitOfReversedStream(&obp, out, bit);\n\t\t}\n\t\t/*obp += diff; --> no, fill in some value in the padding bits too, to avoid\n\t\t\"Use of uninitialised value of size ###\" warning from valgrind*/\n\t\tfor (x = 0; x != diff; ++x) setBitOfReversedStream(&obp, out, 0);\n\t}\n}\n\n/*\nin: non-interlaced image with size w*h\nout: the same pixels, but re-ordered according to PNG's Adam7 interlacing, with\nno padding bits between scanlines, but with padding between reduced images so that each\nreduced image starts at a byte boundary.\nbpp: bits per pixel\nin has no padding bits, neither between scanlines nor between reduced images;\nits size in bits is w * h * bpp.\nout is possibly bigger due to the padding bits between reduced images.\nNOTE: comments about padding bits are only relevant if bpp < 8\n*/\nstatic void Adam7_interlace(unsigned char* out, const unsigned char* in, unsigned w, unsigned h, unsigned bpp)\n{\n\tunsigned passw[7], passh[7];\n\tsize_t filter_passstart[8], padded_passstart[8], passstart[8];\n\tunsigned i;\n\n\tAdam7_getpassvalues(passw, passh, filter_passstart, padded_passstart, passstart, w, h, bpp);\n\n\tif (bpp >= 8)\n\t{\n\t\tfor (i = 0; i != 7; ++i)\n\t\t{\n\t\t\tunsigned x, y, b;\n\t\t\tsize_t bytewidth = bpp / 8;\n\t\t\tfor (y = 0; y < passh[i]; ++y)\n\t\t\t\tfor (x = 0; x < passw[i]; ++x)\n\t\t\t\t{\n\t\t\t\t\tsize_t pixelinstart = ((ADAM7_IY[i] + y * ADAM7_DY[i]) * w + ADAM7_IX[i] + x * ADAM7_DX[i]) * bytewidth;\n\t\t\t\t\tsize_t pixeloutstart = passstart[i] + (y * passw[i] + x) * bytewidth;\n\t\t\t\t\tfor (b = 0; b < bytewidth; ++b)\n\t\t\t\t\t{\n\t\t\t\t\t\tout[pixeloutstart + b] = in[pixelinstart + b];\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t}\n\t}\n\telse /*bpp < 8: Adam7 with pixels < 8 bit is a bit trickier: with bit pointers*/\n\t{\n\t\tfor (i = 0; i != 7; ++i)\n\t\t{\n\t\t\tunsigned x, y, b;\n\t\t\tunsigned ilinebits = bpp * 
passw[i];\n\t\t\tunsigned olinebits = bpp * w;\n\t\t\tsize_t obp, ibp; /*bit pointers (for out and in buffer)*/\n\t\t\tfor (y = 0; y < passh[i]; ++y)\n\t\t\t\tfor (x = 0; x < passw[i]; ++x)\n\t\t\t\t{\n\t\t\t\t\tibp = (ADAM7_IY[i] + y * ADAM7_DY[i]) * olinebits + (ADAM7_IX[i] + x * ADAM7_DX[i]) * bpp;\n\t\t\t\t\tobp = (8 * passstart[i]) + (y * ilinebits + x * bpp);\n\t\t\t\t\tfor (b = 0; b < bpp; ++b)\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned char bit = readBitFromReversedStream(&ibp, in);\n\t\t\t\t\t\tsetBitOfReversedStream(&obp, out, bit);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t}\n\t}\n}\n\n/*out must be buffer big enough to contain uncompressed IDAT chunk data, and in must contain the full image.\nreturn value is error**/\nstatic unsigned preProcessScanlines(unsigned char** out, size_t* outsize, const unsigned char* in,\n\tunsigned w, unsigned h,\n\tconst LodePNGInfo* info_png, const LodePNGEncoderSettings* settings)\n{\n\t/*\n\tThis function converts the pure 2D image with the PNG's colortype, into filtered-padded-interlaced data. 
Steps:\n\t*) if no Adam7: 1) add padding bits (= possible extra bits per scanline if bpp < 8) 2) filter\n\t*) if Adam7: 1) Adam7_interlace 2) 7x add padding bits 3) 7x filter\n\t*/\n\tunsigned bpp = lodepng_get_bpp(&info_png->color);\n\tunsigned error = 0;\n\n\tif (info_png->interlace_method == 0)\n\t{\n\t\t*outsize = h + (h * ((w * bpp + 7) / 8)); /*image size plus an extra byte per scanline + possible padding bits*/\n\t\t*out = (unsigned char*)lodepng_malloc(*outsize);\n\t\tif (!(*out) && (*outsize)) error = 83; /*alloc fail*/\n\n\t\tif (!error)\n\t\t{\n\t\t\t/*non multiple of 8 bits per scanline, padding bits needed per scanline*/\n\t\t\tif (bpp < 8 && w * bpp != ((w * bpp + 7) / 8) * 8)\n\t\t\t{\n\t\t\t\tunsigned char* padded = (unsigned char*)lodepng_malloc(h * ((w * bpp + 7) / 8));\n\t\t\t\tif (!padded) error = 83; /*alloc fail*/\n\t\t\t\tif (!error)\n\t\t\t\t{\n\t\t\t\t\taddPaddingBits(padded, in, ((w * bpp + 7) / 8) * 8, w * bpp, h);\n\t\t\t\t\terror = filter(*out, padded, w, h, &info_png->color, settings);\n\t\t\t\t}\n\t\t\t\tlodepng_free(padded);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t/*we can immediately filter into the out buffer, no other steps needed*/\n\t\t\t\terror = filter(*out, in, w, h, &info_png->color, settings);\n\t\t\t}\n\t\t}\n\t}\n\telse /*interlace_method is 1 (Adam7)*/\n\t{\n\t\tunsigned passw[7], passh[7];\n\t\tsize_t filter_passstart[8], padded_passstart[8], passstart[8];\n\t\tunsigned char* adam7;\n\n\t\tAdam7_getpassvalues(passw, passh, filter_passstart, padded_passstart, passstart, w, h, bpp);\n\n\t\t*outsize = filter_passstart[7]; /*image size plus an extra byte per scanline + possible padding bits*/\n\t\t*out = (unsigned char*)lodepng_malloc(*outsize);\n\t\tif (!(*out)) error = 83; /*alloc fail*/\n\n\t\tadam7 = (unsigned char*)lodepng_malloc(passstart[7]);\n\t\tif (!adam7 && passstart[7]) error = 83; /*alloc fail*/\n\n\t\tif (!error)\n\t\t{\n\t\t\tunsigned i;\n\n\t\t\tAdam7_interlace(adam7, in, w, h, bpp);\n\t\t\tfor (i = 0; i != 7; 
++i)\n\t\t\t{\n\t\t\t\tif (bpp < 8)\n\t\t\t\t{\n\t\t\t\t\tunsigned char* padded = (unsigned char*)lodepng_malloc(padded_passstart[i + 1] - padded_passstart[i]);\n\t\t\t\t\tif (!padded) ERROR_BREAK(83); /*alloc fail*/\n\t\t\t\t\taddPaddingBits(padded, &adam7[passstart[i]],\n\t\t\t\t\t\t((passw[i] * bpp + 7) / 8) * 8, passw[i] * bpp, passh[i]);\n\t\t\t\t\terror = filter(&(*out)[filter_passstart[i]], padded,\n\t\t\t\t\t\tpassw[i], passh[i], &info_png->color, settings);\n\t\t\t\t\tlodepng_free(padded);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\terror = filter(&(*out)[filter_passstart[i]], &adam7[padded_passstart[i]],\n\t\t\t\t\t\tpassw[i], passh[i], &info_png->color, settings);\n\t\t\t\t}\n\n\t\t\t\tif (error) break;\n\t\t\t}\n\t\t}\n\n\t\tlodepng_free(adam7);\n\t}\n\n\treturn error;\n}\n\n/*\npalette must have 4 * palettesize bytes allocated, and given in format RGBARGBARGBARGBA...\nreturns 0 if the palette is opaque,\nreturns 1 if the palette has a single color with alpha 0 ==> color key\nreturns 2 if the palette is semi-translucent.\n*/\nstatic unsigned getPaletteTranslucency(const unsigned char* palette, size_t palettesize)\n{\n\tsize_t i;\n\tunsigned key = 0;\n\tunsigned r = 0, g = 0, b = 0; /*the value of the color with alpha 0, so long as color keying is possible*/\n\tfor (i = 0; i != palettesize; ++i)\n\t{\n\t\tif (!key && palette[4 * i + 3] == 0)\n\t\t{\n\t\t\tr = palette[4 * i + 0]; g = palette[4 * i + 1]; b = palette[4 * i + 2];\n\t\t\tkey = 1;\n\t\t\ti = (size_t)(-1); /*restart from beginning, to detect earlier opaque colors with key's value*/\n\t\t}\n\t\telse if (palette[4 * i + 3] != 255) return 2;\n\t\t/*when key, no opaque RGB may have key's RGB*/\n\t\telse if (key && r == palette[i * 4 + 0] && g == palette[i * 4 + 1] && b == palette[i * 4 + 2]) return 2;\n\t}\n\treturn key;\n}\n\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\nstatic unsigned addUnknownChunks(ucvector* out, unsigned char* data, size_t datasize)\n{\n\tunsigned char* inchunk = 
data;\n\twhile ((size_t)(inchunk - data) < datasize)\n\t{\n\t\tCERROR_TRY_RETURN(lodepng_chunk_append(&out->data, &out->size, inchunk));\n\t\tout->allocsize = out->size; /*fix the allocsize again*/\n\t\tinchunk = lodepng_chunk_next(inchunk);\n\t}\n\treturn 0;\n}\n\nstatic unsigned isGreyICCProfile(const unsigned char* profile, unsigned size)\n{\n\t/*\n\tIt is a grey profile if bytes 16-19 are \"GRAY\", an RGB profile if bytes 16-19\n\tare \"RGB \". We do not perform any full parsing of the ICC profile here, other\n\tthan checking these 4 bytes for a grayscale profile. Other than that, validity of\n\tthe profile is not checked. This is needed only because the PNG specification\n\trequires using a non-grey color model if there is an ICC profile with \"RGB \"\n\t(sadly limiting compression opportunities if the input data is greyscale RGB\n\tdata), and requires using a grey color model if it is \"GRAY\".\n\t*/\n\tif (size < 20) return 0;\n\treturn profile[16] == 'G' && profile[17] == 'R' && profile[18] == 'A' && profile[19] == 'Y';\n}\n\nstatic unsigned isRGBICCProfile(const unsigned char* profile, unsigned size)\n{\n\t/* See comment in isGreyICCProfile */\n\tif (size < 20) return 0;\n\treturn profile[16] == 'R' && profile[17] == 'G' && profile[18] == 'B' && profile[19] == ' ';\n}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\nunsigned lodepng_encode(unsigned char** out, size_t* outsize,\n\tconst unsigned char* image, unsigned w, unsigned h,\n\tLodePNGState* state)\n{\n\tunsigned char* data = 0; /*uncompressed version of the IDAT chunk data*/\n\tsize_t datasize = 0;\n\tucvector outv;\n\tLodePNGInfo info;\n\n\tucvector_init(&outv);\n\tlodepng_info_init(&info);\n\n\t/*provide some proper output values in case an error happens*/\n\t*out = 0;\n\t*outsize = 0;\n\tstate->error = 0;\n\n\t/*check validity of input values*/\n\tif ((state->info_png.color.colortype == LCT_PALETTE || state->encoder.force_palette)\n\t\t&& (state->info_png.color.palettesize == 0 || 
state->info_png.color.palettesize > 256))\n\t{\n\t\tstate->error = 68; /*invalid palette size, it is only allowed to be 1-256*/\n\t\tgoto cleanup;\n\t}\n\tif (state->encoder.zlibsettings.btype > 2)\n\t{\n\t\tstate->error = 61; /*error: unexisting btype*/\n\t\tgoto cleanup;\n\t}\n\tif (state->info_png.interlace_method > 1)\n\t{\n\t\tstate->error = 71; /*error: unexisting interlace mode*/\n\t\tgoto cleanup;\n\t}\n\tstate->error = checkColorValidity(state->info_png.color.colortype, state->info_png.color.bitdepth);\n\tif (state->error) goto cleanup; /*error: unexisting color type given*/\n\tstate->error = checkColorValidity(state->info_raw.colortype, state->info_raw.bitdepth);\n\tif (state->error) goto cleanup; /*error: unexisting color type given*/\n\n\t/* color convert and compute scanline filter types */\n\tlodepng_info_copy(&info, &state->info_png);\n\tif (state->encoder.auto_convert)\n\t{\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\tif (state->info_png.background_defined)\n\t\t{\n\t\t\tunsigned bg_r = state->info_png.background_r;\n\t\t\tunsigned bg_g = state->info_png.background_g;\n\t\t\tunsigned bg_b = state->info_png.background_b;\n\t\t\tunsigned r = 0, g = 0, b = 0;\n\t\t\tLodePNGColorProfile prof;\n\t\t\tLodePNGColorMode mode16 = lodepng_color_mode_make(LCT_RGB, 16);\n\t\t\tlodepng_convert_rgb(&r, &g, &b, bg_r, bg_g, bg_b, &mode16, &state->info_png.color);\n\t\t\tlodepng_color_profile_init(&prof);\n\t\t\tstate->error = lodepng_get_color_profile(&prof, image, w, h, &state->info_raw);\n\t\t\tif (state->error) goto cleanup;\n\t\t\tlodepng_color_profile_add(&prof, r, g, b, 65535);\n\t\t\tstate->error = auto_choose_color_from_profile(&info.color, &state->info_raw, &prof);\n\t\t\tif (state->error) goto cleanup;\n\t\t\tif (lodepng_convert_rgb(&info.background_r, &info.background_g, &info.background_b,\n\t\t\t\tbg_r, bg_g, bg_b, &info.color, &state->info_png.color))\n\t\t\t{\n\t\t\t\tstate->error = 104;\n\t\t\t\tgoto 
cleanup;\n\t\t\t}\n\t\t}\n\t\telse\n#endif /* LODEPNG_COMPILE_ANCILLARY_CHUNKS */\n\t\t{\n\t\t\tstate->error = lodepng_auto_choose_color(&info.color, image, w, h, &state->info_raw);\n\t\t\tif (state->error) goto cleanup;\n\t\t}\n\t}\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tif (state->info_png.iccp_defined)\n\t{\n\t\tunsigned grey_icc = isGreyICCProfile(state->info_png.iccp_profile, state->info_png.iccp_profile_size);\n\t\tunsigned grey_png = info.color.colortype == LCT_GREY || info.color.colortype == LCT_GREY_ALPHA;\n\t\t/* TODO: perhaps instead of giving errors or less optimal compression, we can automatically modify\n\t\tthe ICC profile here to say \"GRAY\" or \"RGB \" to match the PNG color type, unless this will require\n\t\tnon trivial changes to the rest of the ICC profile */\n\t\tif (!grey_icc && !isRGBICCProfile(state->info_png.iccp_profile, state->info_png.iccp_profile_size))\n\t\t{\n\t\t\tstate->error = 100; /* Disallowed profile color type for PNG */\n\t\t\tgoto cleanup;\n\t\t}\n\t\tif (!state->encoder.auto_convert && grey_icc != grey_png)\n\t\t{\n\t\t\t/* Non recoverable: encoder not allowed to convert color type, and requested color type not\n\t\t\tcompatible with ICC color type */\n\t\t\tstate->error = 101;\n\t\t\tgoto cleanup;\n\t\t}\n\t\tif (grey_icc && !grey_png)\n\t\t{\n\t\t\t/* Non recoverable: trying to set greyscale ICC profile while colored pixels were given */\n\t\t\tstate->error = 102;\n\t\t\tgoto cleanup;\n\t\t\t/* NOTE: this relies on the fact that lodepng_auto_choose_color never returns palette for greyscale pixels */\n\t\t}\n\t\tif (!grey_icc && grey_png)\n\t\t{\n\t\t\t/* Recoverable but an unfortunate loss in compression density: We have greyscale pixels but\n\t\t\tare forced to store them in more expensive RGB format that will repeat each value 3 times\n\t\t\tbecause the PNG spec does not allow an RGB ICC profile with internal greyscale color data */\n\t\t\tif (info.color.colortype == LCT_GREY) info.color.colortype = 
LCT_RGB;\n\t\t\tif (info.color.colortype == LCT_GREY_ALPHA) info.color.colortype = LCT_RGBA;\n\t\t\tif (info.color.bitdepth < 8) info.color.bitdepth = 8;\n\t\t}\n\t}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\tif (!lodepng_color_mode_equal(&state->info_raw, &info.color))\n\t{\n\t\tunsigned char* converted;\n\t\tsize_t size = ((size_t)w * (size_t)h * (size_t)lodepng_get_bpp(&info.color) + 7) / 8;\n\n\t\tconverted = (unsigned char*)lodepng_malloc(size);\n\t\tif (!converted && size) state->error = 83; /*alloc fail*/\n\t\tif (!state->error)\n\t\t{\n\t\t\tstate->error = lodepng_convert(converted, image, &info.color, &state->info_raw, w, h);\n\t\t}\n\t\tif (!state->error) preProcessScanlines(&data, &datasize, converted, w, h, &info, &state->encoder);\n\t\tlodepng_free(converted);\n\t\tif (state->error) goto cleanup;\n\t}\n\telse preProcessScanlines(&data, &datasize, image, w, h, &info, &state->encoder);\n\n\t/* output all PNG chunks */\n\t{\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\tsize_t i;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\t/*write signature and chunks*/\n\t\twriteSignature(&outv);\n\t\t/*IHDR*/\n\t\taddChunk_IHDR(&outv, w, h, info.color.colortype, info.color.bitdepth, info.interlace_method);\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t/*unknown chunks between IHDR and PLTE*/\n\t\tif (info.unknown_chunks_data[0])\n\t\t{\n\t\t\tstate->error = addUnknownChunks(&outv, info.unknown_chunks_data[0], info.unknown_chunks_size[0]);\n\t\t\tif (state->error) goto cleanup;\n\t\t}\n\t\t/*color profile chunks must come before PLTE */\n\t\tif (info.iccp_defined) addChunk_iCCP(&outv, &info, &state->encoder.zlibsettings);\n\t\tif (info.srgb_defined) addChunk_sRGB(&outv, &info);\n\t\tif (info.gama_defined) addChunk_gAMA(&outv, &info);\n\t\tif (info.chrm_defined) addChunk_cHRM(&outv, &info);\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\t/*PLTE*/\n\t\tif (info.color.colortype == LCT_PALETTE)\n\t\t{\n\t\t\taddChunk_PLTE(&outv, &info.color);\n\t\t}\n\t\tif 
(state->encoder.force_palette && (info.color.colortype == LCT_RGB || info.color.colortype == LCT_RGBA))\n\t\t{\n\t\t\taddChunk_PLTE(&outv, &info.color);\n\t\t}\n\t\t/*tRNS*/\n\t\tif (info.color.colortype == LCT_PALETTE && getPaletteTranslucency(info.color.palette, info.color.palettesize) != 0)\n\t\t{\n\t\t\taddChunk_tRNS(&outv, &info.color);\n\t\t}\n\t\tif ((info.color.colortype == LCT_GREY || info.color.colortype == LCT_RGB) && info.color.key_defined)\n\t\t{\n\t\t\taddChunk_tRNS(&outv, &info.color);\n\t\t}\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t/*bKGD (must come between PLTE and the IDAT chunks)*/\n\t\tif (info.background_defined)\n\t\t{\n\t\t\tstate->error = addChunk_bKGD(&outv, &info);\n\t\t\tif (state->error) goto cleanup;\n\t\t}\n\t\t/*pHYs (must come before the IDAT chunks)*/\n\t\tif (info.phys_defined) addChunk_pHYs(&outv, &info);\n\n\t\t/*unknown chunks between PLTE and IDAT*/\n\t\tif (info.unknown_chunks_data[1])\n\t\t{\n\t\t\tstate->error = addUnknownChunks(&outv, info.unknown_chunks_data[1], info.unknown_chunks_size[1]);\n\t\t\tif (state->error) goto cleanup;\n\t\t}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\t/*IDAT (multiple IDAT chunks must be consecutive)*/\n\t\tstate->error = addChunk_IDAT(&outv, data, datasize, &state->encoder.zlibsettings);\n\t\tif (state->error) goto cleanup;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\t\t/*tIME*/\n\t\tif (info.time_defined) addChunk_tIME(&outv, &info.time);\n\t\t/*tEXt and/or zTXt*/\n\t\tfor (i = 0; i != info.text_num; ++i)\n\t\t{\n\t\t\tif (strlen(info.text_keys[i]) > 79)\n\t\t\t{\n\t\t\t\tstate->error = 66; /*text chunk too large*/\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\tif (strlen(info.text_keys[i]) < 1)\n\t\t\t{\n\t\t\t\tstate->error = 67; /*text chunk too small*/\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\tif (state->encoder.text_compression)\n\t\t\t{\n\t\t\t\taddChunk_zTXt(&outv, info.text_keys[i], info.text_strings[i], 
&state->encoder.zlibsettings);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\taddChunk_tEXt(&outv, info.text_keys[i], info.text_strings[i]);\n\t\t\t}\n\t\t}\n\t\t/*LodePNG version id in text chunk*/\n\t\tif (state->encoder.add_id)\n\t\t{\n\t\t\tunsigned already_added_id_text = 0;\n\t\t\tfor (i = 0; i != info.text_num; ++i)\n\t\t\t{\n\t\t\t\tif (!strcmp(info.text_keys[i], \"LodePNG\"))\n\t\t\t\t{\n\t\t\t\t\talready_added_id_text = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (already_added_id_text == 0)\n\t\t\t{\n\t\t\t\taddChunk_tEXt(&outv, \"LodePNG\", LODEPNG_VERSION_STRING); /*it's shorter as tEXt than as zTXt chunk*/\n\t\t\t}\n\t\t}\n\t\t/*iTXt*/\n\t\tfor (i = 0; i != info.itext_num; ++i)\n\t\t{\n\t\t\tif (strlen(info.itext_keys[i]) > 79)\n\t\t\t{\n\t\t\t\tstate->error = 66; /*text chunk too large*/\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\tif (strlen(info.itext_keys[i]) < 1)\n\t\t\t{\n\t\t\t\tstate->error = 67; /*text chunk too small*/\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\taddChunk_iTXt(&outv, state->encoder.text_compression,\n\t\t\t\tinfo.itext_keys[i], info.itext_langtags[i], info.itext_transkeys[i], info.itext_strings[i],\n\t\t\t\t&state->encoder.zlibsettings);\n\t\t}\n\n\t\t/*unknown chunks between IDAT and IEND*/\n\t\tif (info.unknown_chunks_data[2])\n\t\t{\n\t\t\tstate->error = addUnknownChunks(&outv, info.unknown_chunks_data[2], info.unknown_chunks_size[2]);\n\t\t\tif (state->error) goto cleanup;\n\t\t}\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n\t\taddChunk_IEND(&outv);\n\t}\n\ncleanup:\n\tlodepng_info_cleanup(&info);\n\tlodepng_free(data);\n\n\t/*instead of cleaning the vector up, give it to the output*/\n\t*out = outv.data;\n\t*outsize = outv.size;\n\n\treturn state->error;\n}\n\nunsigned lodepng_encode_memory(unsigned char** out, size_t* outsize, const unsigned char* image,\n\tunsigned w, unsigned h, LodePNGColorType colortype, unsigned bitdepth)\n{\n\tunsigned error;\n\tLodePNGState state;\n\tlodepng_state_init(&state);\n\tstate.info_raw.colortype = 
colortype;\n\tstate.info_raw.bitdepth = bitdepth;\n\tstate.info_png.color.colortype = colortype;\n\tstate.info_png.color.bitdepth = bitdepth;\n\tlodepng_encode(out, outsize, image, w, h, &state);\n\terror = state.error;\n\tlodepng_state_cleanup(&state);\n\treturn error;\n}\n\nunsigned lodepng_encode32(unsigned char** out, size_t* outsize, const unsigned char* image, unsigned w, unsigned h)\n{\n\treturn lodepng_encode_memory(out, outsize, image, w, h, LCT_RGBA, 8);\n}\n\nunsigned lodepng_encode24(unsigned char** out, size_t* outsize, const unsigned char* image, unsigned w, unsigned h)\n{\n\treturn lodepng_encode_memory(out, outsize, image, w, h, LCT_RGB, 8);\n}\n\n#ifdef LODEPNG_COMPILE_DISK\nunsigned lodepng_encode_file(const char* filename, const unsigned char* image, unsigned w, unsigned h,\n\tLodePNGColorType colortype, unsigned bitdepth)\n{\n\tunsigned char* buffer;\n\tsize_t buffersize;\n\tunsigned error = lodepng_encode_memory(&buffer, &buffersize, image, w, h, colortype, bitdepth);\n\tif (!error) error = lodepng_save_file(buffer, buffersize, filename);\n\tlodepng_free(buffer);\n\treturn error;\n}\n\nunsigned lodepng_encode32_file(const char* filename, const unsigned char* image, unsigned w, unsigned h)\n{\n\treturn lodepng_encode_file(filename, image, w, h, LCT_RGBA, 8);\n}\n\nunsigned lodepng_encode24_file(const char* filename, const unsigned char* image, unsigned w, unsigned h)\n{\n\treturn lodepng_encode_file(filename, image, w, h, LCT_RGB, 8);\n}\n#endif /*LODEPNG_COMPILE_DISK*/\n\nvoid lodepng_encoder_settings_init(LodePNGEncoderSettings* settings)\n{\n\tlodepng_compress_settings_init(&settings->zlibsettings);\n\tsettings->filter_palette_zero = 1;\n\tsettings->filter_strategy = LFS_MINSUM;\n\tsettings->auto_convert = 1;\n\tsettings->force_palette = 0;\n\tsettings->predefined_filters = 0;\n#ifdef LODEPNG_COMPILE_ANCILLARY_CHUNKS\n\tsettings->add_id = 0;\n\tsettings->text_compression = 1;\n#endif /*LODEPNG_COMPILE_ANCILLARY_CHUNKS*/\n}\n\n#endif 
/*LODEPNG_COMPILE_ENCODER*/\n#endif /*LODEPNG_COMPILE_PNG*/\n\n#ifdef LODEPNG_COMPILE_ERROR_TEXT\n/*\nThis returns the description of a numerical error code in English. This is also\nthe documentation of all the error codes.\n*/\nconst char* lodepng_error_text(unsigned code)\n{\n\tswitch (code)\n\t{\n\tcase 0: return \"no error, everything went ok\";\n\tcase 1: return \"nothing done yet\"; /*the Encoder/Decoder has done nothing yet, error checking makes no sense yet*/\n\tcase 10: return \"end of input memory reached without huffman end code\"; /*while huffman decoding*/\n\tcase 11: return \"error in code tree made it jump outside of huffman tree\"; /*while huffman decoding*/\n\tcase 13: return \"problem while processing dynamic deflate block\";\n\tcase 14: return \"problem while processing dynamic deflate block\";\n\tcase 15: return \"problem while processing dynamic deflate block\";\n\tcase 16: return \"unexisting code while processing dynamic deflate block\";\n\tcase 17: return \"end of out buffer memory reached while inflating\";\n\tcase 18: return \"invalid distance code while inflating\";\n\tcase 19: return \"end of out buffer memory reached while inflating\";\n\tcase 20: return \"invalid deflate block BTYPE encountered while decoding\";\n\tcase 21: return \"NLEN is not ones complement of LEN in a deflate block\";\n\t\t/*end of out buffer memory reached while inflating:\n\t\tThis can happen if the inflated deflate data is longer than the amount of bytes required to fill up\n\t\tall the pixels of the image, given the color depth and image dimensions. 
Something that doesn't\n\t\thappen in a normal, well encoded, PNG image.*/\n\tcase 22: return \"end of out buffer memory reached while inflating\";\n\tcase 23: return \"end of in buffer memory reached while inflating\";\n\tcase 24: return \"invalid FCHECK in zlib header\";\n\tcase 25: return \"invalid compression method in zlib header\";\n\tcase 26: return \"FDICT encountered in zlib header while it's not used for PNG\";\n\tcase 27: return \"PNG file is smaller than a PNG header\";\n\t\t/*Checks the magic file header, the first 8 bytes of the PNG file*/\n\tcase 28: return \"incorrect PNG signature, it's no PNG or corrupted\";\n\tcase 29: return \"first chunk is not the header chunk\";\n\tcase 30: return \"chunk length too large, chunk broken off at end of file\";\n\tcase 31: return \"illegal PNG color type or bpp\";\n\tcase 32: return \"illegal PNG compression method\";\n\tcase 33: return \"illegal PNG filter method\";\n\tcase 34: return \"illegal PNG interlace method\";\n\tcase 35: return \"chunk length of a chunk is too large or the chunk too small\";\n\tcase 36: return \"illegal PNG filter type encountered\";\n\tcase 37: return \"illegal bit depth for this color type given\";\n\tcase 38: return \"the palette is too big\"; /*more than 256 colors*/\n\tcase 39: return \"tRNS chunk before PLTE or has more entries than palette size\";\n\tcase 40: return \"tRNS chunk has wrong size for greyscale image\";\n\tcase 41: return \"tRNS chunk has wrong size for RGB image\";\n\tcase 42: return \"tRNS chunk appeared while it was not allowed for this color type\";\n\tcase 43: return \"bKGD chunk has wrong size for palette image\";\n\tcase 44: return \"bKGD chunk has wrong size for greyscale image\";\n\tcase 45: return \"bKGD chunk has wrong size for RGB image\";\n\tcase 48: return \"empty input buffer given to decoder. 
Maybe caused by non-existing file?\";\n\tcase 49: return \"jumped past memory while generating dynamic huffman tree\";\n\tcase 50: return \"jumped past memory while generating dynamic huffman tree\";\n\tcase 51: return \"jumped past memory while inflating huffman block\";\n\tcase 52: return \"jumped past memory while inflating\";\n\tcase 53: return \"size of zlib data too small\";\n\tcase 54: return \"repeat symbol in tree while there was no value symbol yet\";\n\t\t/*jumped past tree while generating huffman tree, this could be when the\n\t\ttree will have more leaves than symbols after generating it out of the\n\t\tgiven lengths. They call this an oversubscribed dynamic bit lengths tree in zlib.*/\n\tcase 55: return \"jumped past tree while generating huffman tree\";\n\tcase 56: return \"given output image colortype or bitdepth not supported for color conversion\";\n\tcase 57: return \"invalid CRC encountered (checking CRC can be disabled)\";\n\tcase 58: return \"invalid ADLER32 encountered (checking ADLER32 can be disabled)\";\n\tcase 59: return \"requested color conversion not supported\";\n\tcase 60: return \"invalid window size given in the settings of the encoder (must be 0-32768)\";\n\tcase 61: return \"invalid BTYPE given in the settings of the encoder (only 0, 1 and 2 are allowed)\";\n\t\t/*LodePNG leaves the choice of RGB to greyscale conversion formula to the user.*/\n\tcase 62: return \"conversion from color to greyscale not supported\";\n\tcase 63: return \"length of a chunk too long, max allowed for PNG is 2147483647 bytes per chunk\"; /*(2^31-1)*/\n\t\t/*this would result in the inability of a deflated block to ever contain an end code. 
It must be at least 1.*/\n\tcase 64: return \"the length of the END symbol 256 in the Huffman tree is 0\";\n\tcase 66: return \"the length of a text chunk keyword given to the encoder is longer than the maximum of 79 bytes\";\n\tcase 67: return \"the length of a text chunk keyword given to the encoder is smaller than the minimum of 1 byte\";\n\tcase 68: return \"tried to encode a PLTE chunk with a palette that has less than 1 or more than 256 colors\";\n\tcase 69: return \"unknown chunk type with 'critical' flag encountered by the decoder\";\n\tcase 71: return \"unexisting interlace mode given to encoder (must be 0 or 1)\";\n\tcase 72: return \"while decoding, unexisting compression method encountered in zTXt or iTXt chunk (it must be 0)\";\n\tcase 73: return \"invalid tIME chunk size\";\n\tcase 74: return \"invalid pHYs chunk size\";\n\t\t/*length could be wrong, or data chopped off*/\n\tcase 75: return \"no null termination char found while decoding text chunk\";\n\tcase 76: return \"iTXt chunk too short to contain required bytes\";\n\tcase 77: return \"integer overflow in buffer size\";\n\tcase 78: return \"failed to open file for reading\"; /*file doesn't exist or couldn't be opened for reading*/\n\tcase 79: return \"failed to open file for writing\";\n\tcase 80: return \"tried creating a tree of 0 symbols\";\n\tcase 81: return \"lazy matching at pos 0 is impossible\";\n\tcase 82: return \"color conversion to palette requested while a color isn't in palette, or index out of bounds\";\n\tcase 83: return \"memory allocation failed\";\n\tcase 84: return \"given image too small to contain all pixels to be encoded\";\n\tcase 86: return \"impossible offset in lz77 encoding (internal bug)\";\n\tcase 87: return \"must provide custom zlib function pointer if LODEPNG_COMPILE_ZLIB is not defined\";\n\tcase 88: return \"invalid filter strategy given for LodePNGEncoderSettings.filter_strategy\";\n\tcase 89: return \"text chunk keyword too short or long: must have size 
1-79\";\n\t\t/*the windowsize in the LodePNGCompressSettings. Requiring POT(==> & instead of %) makes encoding 12% faster.*/\n\tcase 90: return \"windowsize must be a power of two\";\n\tcase 91: return \"invalid decompressed idat size\";\n\tcase 92: return \"integer overflow due to too many pixels\";\n\tcase 93: return \"zero width or height is invalid\";\n\tcase 94: return \"header chunk must have a size of 13 bytes\";\n\tcase 95: return \"integer overflow with combined idat chunk size\";\n\tcase 96: return \"invalid gAMA chunk size\";\n\tcase 97: return \"invalid cHRM chunk size\";\n\tcase 98: return \"invalid sRGB chunk size\";\n\tcase 99: return \"invalid sRGB rendering intent\";\n\tcase 100: return \"invalid ICC profile color type, the PNG specification only allows RGB or GRAY\";\n\tcase 101: return \"PNG specification does not allow RGB ICC profile on grey color types and vice versa\";\n\tcase 102: return \"not allowed to set greyscale ICC profile with colored pixels by PNG specification\";\n\tcase 103: return \"Invalid palette index in bKGD chunk. Maybe it came before PLTE chunk?\";\n\tcase 104: return \"Invalid bKGD color while encoding (e.g. 
palette index out of range)\";\n\t}\n\treturn \"unknown error code\";\n}\n#endif /*LODEPNG_COMPILE_ERROR_TEXT*/\n\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* // C++ Wrapper                                                          // */\n/* ////////////////////////////////////////////////////////////////////////// */\n/* ////////////////////////////////////////////////////////////////////////// */\n\n#ifdef LODEPNG_COMPILE_CPP\nnamespace lodepng\n{\n\n#ifdef LODEPNG_COMPILE_DISK\n\tunsigned load_file(std::vector<unsigned char>& buffer, const std::string& filename)\n\t{\n\t\tlong size = lodepng_filesize(filename.c_str());\n\t\tif (size < 0) return 78;\n\t\tbuffer.resize((size_t)size);\n\t\treturn size == 0 ? 0 : lodepng_buffer_file(&buffer[0], (size_t)size, filename.c_str());\n\t}\n\n\t/*write given buffer to the file, overwriting the file, it doesn't append to it.*/\n\tunsigned save_file(const std::vector<unsigned char>& buffer, const std::string& filename)\n\t{\n\t\treturn lodepng_save_file(buffer.empty() ? 0 : &buffer[0], buffer.size(), filename.c_str());\n\t}\n#endif /* LODEPNG_COMPILE_DISK */\n\n#ifdef LODEPNG_COMPILE_ZLIB\n#ifdef LODEPNG_COMPILE_DECODER\n\tunsigned decompress(std::vector<unsigned char>& out, const unsigned char* in, size_t insize,\n\t\tconst LodePNGDecompressSettings& settings)\n\t{\n\t\tunsigned char* buffer = 0;\n\t\tsize_t buffersize = 0;\n\t\tunsigned error = zlib_decompress(&buffer, &buffersize, in, insize, &settings);\n\t\tif (buffer)\n\t\t{\n\t\t\tout.insert(out.end(), &buffer[0], &buffer[buffersize]);\n\t\t\tlodepng_free(buffer);\n\t\t}\n\t\treturn error;\n\t}\n\n\tunsigned decompress(std::vector<unsigned char>& out, const std::vector<unsigned char>& in,\n\t\tconst LodePNGDecompressSettings& settings)\n\t{\n\t\treturn decompress(out, in.empty() ? 
0 : &in[0], in.size(), settings);\n\t}\n#endif /* LODEPNG_COMPILE_DECODER */\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\tunsigned compress(std::vector<unsigned char>& out, const unsigned char* in, size_t insize,\n\t\tconst LodePNGCompressSettings& settings)\n\t{\n\t\tunsigned char* buffer = 0;\n\t\tsize_t buffersize = 0;\n\t\tunsigned error = zlib_compress(&buffer, &buffersize, in, insize, &settings);\n\t\tif (buffer)\n\t\t{\n\t\t\tout.insert(out.end(), &buffer[0], &buffer[buffersize]);\n\t\t\tlodepng_free(buffer);\n\t\t}\n\t\treturn error;\n\t}\n\n\tunsigned compress(std::vector<unsigned char>& out, const std::vector<unsigned char>& in,\n\t\tconst LodePNGCompressSettings& settings)\n\t{\n\t\treturn compress(out, in.empty() ? 0 : &in[0], in.size(), settings);\n\t}\n#endif /* LODEPNG_COMPILE_ENCODER */\n#endif /* LODEPNG_COMPILE_ZLIB */\n\n\n#ifdef LODEPNG_COMPILE_PNG\n\n\tState::State()\n\t{\n\t\tlodepng_state_init(this);\n\t}\n\n\tState::State(const State& other)\n\t{\n\t\tlodepng_state_init(this);\n\t\tlodepng_state_copy(this, &other);\n\t}\n\n\tState::~State()\n\t{\n\t\tlodepng_state_cleanup(this);\n\t}\n\n\tState& State::operator=(const State& other)\n\t{\n\t\tlodepng_state_copy(this, &other);\n\t\treturn *this;\n\t}\n\n#ifdef LODEPNG_COMPILE_DECODER\n\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h, const unsigned char* in,\n\t\tsize_t insize, LodePNGColorType colortype, unsigned bitdepth)\n\t{\n\t\tunsigned char* buffer;\n\t\tunsigned error = lodepng_decode_memory(&buffer, &w, &h, in, insize, colortype, bitdepth);\n\t\tif (buffer && !error)\n\t\t{\n\t\t\tState state;\n\t\t\tstate.info_raw.colortype = colortype;\n\t\t\tstate.info_raw.bitdepth = bitdepth;\n\t\t\tsize_t buffersize = lodepng_get_raw_size(w, h, &state.info_raw);\n\t\t\tout.insert(out.end(), &buffer[0], &buffer[buffersize]);\n\t\t\tlodepng_free(buffer);\n\t\t}\n\t\treturn error;\n\t}\n\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tconst 
std::vector<unsigned char>& in, LodePNGColorType colortype, unsigned bitdepth)\n\t{\n\t\treturn decode(out, w, h, in.empty() ? 0 : &in[0], (unsigned)in.size(), colortype, bitdepth);\n\t}\n\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tState& state,\n\t\tconst unsigned char* in, size_t insize)\n\t{\n\t\tunsigned char* buffer = NULL;\n\t\tunsigned error = lodepng_decode(&buffer, &w, &h, &state, in, insize);\n\t\tif (buffer && !error)\n\t\t{\n\t\t\tsize_t buffersize = lodepng_get_raw_size(w, h, &state.info_raw);\n\t\t\tout.insert(out.end(), &buffer[0], &buffer[buffersize]);\n\t\t}\n\t\tlodepng_free(buffer);\n\t\treturn error;\n\t}\n\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h,\n\t\tState& state,\n\t\tconst std::vector<unsigned char>& in)\n\t{\n\t\treturn decode(out, w, h, state, in.empty() ? 0 : &in[0], in.size());\n\t}\n\n#ifdef LODEPNG_COMPILE_DISK\n\tunsigned decode(std::vector<unsigned char>& out, unsigned& w, unsigned& h, const std::string& filename,\n\t\tLodePNGColorType colortype, unsigned bitdepth)\n\t{\n\t\tstd::vector<unsigned char> buffer;\n\t\tunsigned error = load_file(buffer, filename);\n\t\tif (error) return error;\n\t\treturn decode(out, w, h, buffer, colortype, bitdepth);\n\t}\n#endif /* LODEPNG_COMPILE_DISK */\n#endif /* LODEPNG_COMPILE_DECODER */\n\n#ifdef LODEPNG_COMPILE_ENCODER\n\tunsigned encode(std::vector<unsigned char>& out, const unsigned char* in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype, unsigned bitdepth)\n\t{\n\t\tunsigned char* buffer;\n\t\tsize_t buffersize;\n\t\tunsigned error = lodepng_encode_memory(&buffer, &buffersize, in, w, h, colortype, bitdepth);\n\t\tif (buffer)\n\t\t{\n\t\t\tout.insert(out.end(), &buffer[0], &buffer[buffersize]);\n\t\t\tlodepng_free(buffer);\n\t\t}\n\t\treturn error;\n\t}\n\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst std::vector<unsigned char>& in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype, 
unsigned bitdepth)\n\t{\n\t\tif (lodepng_get_raw_size_lct(w, h, colortype, bitdepth) > in.size()) return 84;\n\t\treturn encode(out, in.empty() ? 0 : &in[0], w, h, colortype, bitdepth);\n\t}\n\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst unsigned char* in, unsigned w, unsigned h,\n\t\tState& state)\n\t{\n\t\tunsigned char* buffer;\n\t\tsize_t buffersize;\n\t\tunsigned error = lodepng_encode(&buffer, &buffersize, in, w, h, &state);\n\t\tif (buffer)\n\t\t{\n\t\t\tout.insert(out.end(), &buffer[0], &buffer[buffersize]);\n\t\t\tlodepng_free(buffer);\n\t\t}\n\t\treturn error;\n\t}\n\n\tunsigned encode(std::vector<unsigned char>& out,\n\t\tconst std::vector<unsigned char>& in, unsigned w, unsigned h,\n\t\tState& state)\n\t{\n\t\tif (lodepng_get_raw_size(w, h, &state.info_raw) > in.size()) return 84;\n\t\treturn encode(out, in.empty() ? 0 : &in[0], w, h, state);\n\t}\n\n#ifdef LODEPNG_COMPILE_DISK\n\tunsigned encode(const std::string& filename,\n\t\tconst unsigned char* in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype, unsigned bitdepth)\n\t{\n\t\tstd::vector<unsigned char> buffer;\n\t\tunsigned error = encode(buffer, in, w, h, colortype, bitdepth);\n\t\tif (!error) error = save_file(buffer, filename);\n\t\treturn error;\n\t}\n\n\tunsigned encode(const std::string& filename,\n\t\tconst std::vector<unsigned char>& in, unsigned w, unsigned h,\n\t\tLodePNGColorType colortype, unsigned bitdepth)\n\t{\n\t\tif (lodepng_get_raw_size_lct(w, h, colortype, bitdepth) > in.size()) return 84;\n\t\treturn encode(filename, in.empty() ? 0 : &in[0], w, h, colortype, bitdepth);\n\t}\n#endif /* LODEPNG_COMPILE_DISK */\n#endif /* LODEPNG_COMPILE_ENCODER */\n#endif /* LODEPNG_COMPILE_PNG */\n} /* namespace lodepng */\n#endif /*LODEPNG_COMPILE_CPP*/\n"
  },
  {
    "path": "PinBox/PinBox/source/main.cpp",
    "content": "﻿//===============================================================================\r\n// enable this for debug log console on top\r\n//===============================================================================\r\n//#define CONSOLE_DEBUG\r\n//===============================================================================\r\n\r\n#include <3ds.h>\r\n#include <string.h>\r\n#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <malloc.h>\r\n#include <arpa/inet.h>\r\n#include \"ConfigManager.h\"\r\n#include \"PPGraphics.h\"\r\n#include \"PPSessionManager.h\"\r\n\r\n#include \"PPUI.h\"\r\n#include \"PPAudio.h\"\r\n#include \"constant.h\"\r\n\r\nvoid initDbgConsole()\r\n{\r\n#ifdef CONSOLE_DEBUG\r\n\tconsoleInit(GFX_TOP, NULL);\r\n\tprintf(\"Console log initialized\\n\");\r\n#endif\r\n}\r\n\r\nstatic u32 *SOC_buffer = NULL;\r\n\r\n\r\nint main()\r\n{\r\n\t//---------------------------------------------\r\n\t// Init svc\r\n\t//---------------------------------------------\r\n\tacInit();\r\n\taptInit();\r\n\tirrstInit();\r\n\tResult rc = romfsInit();\r\n\tif (rc) printf(\"romfsInit: %08lX\\n\", rc);\r\n\telse printf(\"romfs Init Successful!\\n\");\r\n\r\n\tAPT_SetAppCpuTimeLimit(80);\r\n\t//---------------------------------------------\r\n\t// Init Graphics\r\n\t//---------------------------------------------\r\n\tPPGraphics::Get()->GraphicsInit();\r\n\tPPAudio::Get()->AudioInit();\r\n\tinitDbgConsole();\r\n\tPPUI::InitResource();\r\n\t//---------------------------------------------\r\n\t// Init SOCKET\r\n\t//---------------------------------------------\r\n\tSOC_buffer = (u32*)memalign(SOC_ALIGN, SOC_BUFFERSIZE);\r\n\tu32 ret = socInit(SOC_buffer, SOC_BUFFERSIZE);\r\n\tif (ret != 0)\r\n\t{\r\n\t\treturn 0;\r\n\t}\r\n\t//---------------------------------------------\r\n\t// Init config\r\n\t//---------------------------------------------\r\n\tConfigManager::Get()->InitConfig();\r\n\t//---------------------------------------------\r\n\t// Init 
session manager\r\n\t//---------------------------------------------\r\n\tPPSessionManager* sm = new PPSessionManager();\r\n\t//---------------------------------------------\r\n\t// Wifi Check\r\n\t//---------------------------------------------\r\n\tu32 wifiStatus = 0;\r\n#ifndef USE_CITRA\r\n\twhile (aptMainLoop()) {\r\n\t\tACU_GetWifiStatus(&wifiStatus);\r\n\t\tif (wifiStatus) break;\r\n\r\n\t\t//---------------------------------------------\r\n\t\t// Update Input\r\n\t\t//---------------------------------------------\r\n\t\thidScanInput();\r\n\t\tirrstScanInput();\r\n\t\tPPUI::UpdateInput();\r\n\r\n\t\t//---------------------------------------------\r\n\t\t// Draw UI\r\n\t\t//---------------------------------------------\r\n\t\tPPGraphics::Get()->BeginRender();\r\n\t\tPPGraphics::Get()->RenderOn(GFX_BOTTOM);\r\n\t\tint ret = PPUI::DrawBtmServerSelectScreen(sm);\r\n\t\tPPGraphics::Get()->EndRender();\r\n\r\n\t\tif (ret == -1) break;\r\n\t}\r\n#else\r\n\twifiStatus = 1;\r\n#endif\r\n\t//---------------------------------------------\r\n\t// wifiStatus = 0 : not connected to internet\r\n\t// wifiStatus = 1 : Old 3DS internet\r\n\t// wifiStatus = 2 : New 3DS internet\r\n\t//---------------------------------------------\r\n\tif (wifiStatus) {\r\n\t\tosSetSpeedupEnable(1);\r\n\r\n\r\n\t\t//---------------------------------------------\r\n\t\t// Main loop\r\n\t\t//---------------------------------------------\r\n\t\twhile (aptMainLoop())\r\n\t\t{\r\n\t\t\thidScanInput();\r\n\t\t\tirrstScanInput();\r\n\t\t\t//---------------------------------------------\r\n\t\t\t// Update Input\r\n\t\t\t//---------------------------------------------\r\n\t\t\tPPUI::UpdateInput();\r\n\t\t\tsm->UpdateInputStream(PPUI::getKeyHold(), PPUI::getKeyUp(), \r\n\t\t\t\tPPUI::getLeftCircle().dx, PPUI::getLeftCircle().dy, \r\n\t\t\t\tPPUI::getRightCircle().dx, PPUI::getRightCircle().dy);\r\n\r\n\t\t\t//---------------------------------------------\r\n\t\t\t// Update 
Frame\r\n\t\t\t//---------------------------------------------\r\n\t\t\tPPGraphics::Get()->BeginRender();\r\n\t\t\t//---------------------------------------------\r\n\t\t\t// Draw top UI\r\n\t\t\t//---------------------------------------------\r\n#ifndef CONSOLE_DEBUG\r\n\t\t\tPPGraphics::Get()->RenderOn(GFX_TOP);\r\n\t\t\tif(sm->GetSessionState() == 2)\r\n\t\t\t{\r\n\t\t\t\tPPGraphics::Get()->DrawTopScreenSprite();\r\n\t\t\t}else\r\n\t\t\t{\r\n\t\t\t\tPPUI::DrawIdleTopScreen(sm);\r\n\t\t\t}\r\n#endif\r\n\t\t\t//---------------------------------------------\r\n\t\t\t// Draw bottom UI\r\n\t\t\t//---------------------------------------------\r\n\t\t\tPPGraphics::Get()->RenderOn(GFX_BOTTOM);\r\n\t\t\tint r = 0;\r\n\t\t\tif (PPUI::HasPopup()) r = PPUI::GetPopup()();\r\n\t\t\telse\r\n\t\t\t{\r\n\t\t\t\tswitch (sm->GetSessionState())\r\n\t\t\t\t{\r\n\t\t\t\tcase SS_NOT_CONNECTED:\r\n\t\t\t\tcase SS_CONNECTING:\r\n\t\t\t\tcase SS_CONNECTED:\r\n\t\t\t\tcase SS_FAILED:\r\n\t\t\t\t\tr = PPUI::DrawBtmServerSelectScreen(sm);\r\n\t\t\t\t\tbreak;\r\n\t\t\t\tcase SS_PAIRED: \r\n\t\t\t\tcase SS_STREAMING:\r\n\t\t\t\t\tr = PPUI::DrawBtmPairedScreen(sm);\r\n\t\t\t\t\tbreak;\r\n\t\t\t\tdefault:\r\n\t\t\t\t\tbreak;\r\n\t\t\t\t}\r\n\t\t\t\t\r\n\t\t\t}\r\n\t\t\tPPGraphics::Get()->EndRender();\r\n\r\n\t\t\tif (r == -1) {\r\n\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n\t\t\r\n\tdelete sm;\r\n\r\n\tPPUI::CleanupResource();\r\n\tPPGraphics::Get()->GraphicExit();\r\n\tPPAudio::Get()->AudioExit();\r\n\tConfigManager::Get()->Destroy();\r\n\tirrstExit();\r\n\tsocExit();\r\n\tfree(SOC_buffer);\r\n\tromfsExit();\r\n\taptExit();\r\n\tacExit();\r\n\r\n\treturn 0;\r\n}"
  },
  {
    "path": "PinBox/PinBox/source/vshader.v.pica",
    "content": "; Uniforms\n.fvec projection[4]\n\n; Constants\n.constf RGBA8_TO_FLOAT4(0.00392156862, 0, 0, 0)\n.constf ONES(1.0, 1.0, 1.0, 1.0)\n\n; Outputs\n.out outpos position\n.out outtc0 texcoord0\n.out outclr color\n\n; Inputs (defined as aliases for convenience)\n.alias inpos v0\n.alias inarg v1\n\n.proc main\n\n\t; outpos = projection * in.pos\n\tdp4 outpos.x, projection[0], inpos\n\tdp4 outpos.y, projection[1], inpos\n\tdp4 outpos.z, projection[2], inpos\n\tdp4 outpos.w, projection[3], inpos\n\n\t; outtc0 = in.texcoord\n\tmov outtc0, inarg\n\n\t; outclr = RGBA8_TO_FLOAT4(in.color)\n\tmul outclr, RGBA8_TO_FLOAT4.xxxx, inarg\n\n\tend\n.end\n"
  },
  {
    "path": "PinBox/PinBox/source/yuv_rgb.c",
    "content": "// Copyright 2016 Adrien Descamps\n// Distributed under BSD 3-Clause License\n\n#include \"yuv_rgb.h\"\n\n//#include <x86intrin.h>\n\n#include <stdio.h>\n\n\nuint8_t clamp(int16_t value)\n{\n\treturn value<0 ? 0 : (value>255 ? 255 : value);\n}\n\n// Definitions\n//\n// E'R, E'G, E'B, E'Y, E'Cb and E'Cr refer to the analog signals\n// E'R, E'G, E'B and E'Y range is [0:1], while E'Cb and E'Cr range is [-0.5:0.5]\n// R, G, B, Y, Cb and Cr refer to the digitalized values\n// The digitalized values can use their full range ([0:255] for 8bit values),\n// or a subrange (typically [16:235] for Y and [16:240] for CbCr).\n// We assume here that RGB range is always [0:255], since it is the case for \n// most digitalized images.\n// For 8bit values :\n// * Y = round((YMax-YMin)*E'Y + YMin)\n// * Cb = round((CbRange)*E'Cb + 128)\n// * Cr = round((CrRange)*E'Cr + 128)\n// Where *Min and *Max are the range of each channel\n//\n// In the analog domain , the RGB to YCbCr transformation is defined as:\n// * E'Y = Rf*E'R + Gf*E'G + Bf*E'B\n// Where Rf, Gf and Bf are constants defined in each standard, with \n// Rf + Gf + Bf = 1 (necessary to ensure that E'Y range is [0:1])\n// * E'Cb = (E'B - E'Y) / CbNorm\n// * E'Cr = (E'R - E'Y) / CrNorm\n// Where CbNorm and CrNorm are constants, dependent of Rf, Gf, Bf, computed \n// to normalize to a [-0.5:0.5] range : CbNorm=2*(1-Bf) and CrNorm=2*(1-Rf)\n//\n// Algorithms\n//\n// Most operations will be made in a fixed point format for speed, using \n// N bits of precision. 
In the next section the [x] convention is used for \n// a fixed point rounded value, that is (int being the c type conversion)\n// * [x] = int(x*(2^N)+0.5)\n// N can be different for each factor, we simply use the highest value\n// that will not overflow in 16 bits intermediate variables.\n//.\n// For RGB to YCbCr conversion, we start by generating a pseudo Y value \n// (noted Y') in fixed point format, using the full range for now.\n// * Y' = ([Rf]*R + [Gf]*G + [Bf]*B)>>N\n// We can then compute Cb and Cr by\n// * Cb = ((B - Y')*[CbRange/(255*CbNorm)])>>N + 128\n// * Cr = ((R - Y')*[CrRange/(255*CrNorm)])>>N + 128\n// And finally, we normalize Y to its digital range\n// * Y = (Y'*[(YMax-YMin)/255])>>N + YMin\n// \n// For YCbCr to RGB conversion, we first compute the full range Y' value :\n// * Y' = ((Y-YMin)*[255/(YMax-YMin)])>>N\n// We can then compute B and R values by :\n// * B = ((Cb-128)*[(255*CbNorm)/CbRange])>>N + Y'\n// * R = ((Cr-128)*[(255*CrNorm)/CrRange])>>N + Y'\n// And finally, for G we know that:\n// * G = (Y' - (Rf*R + Bf*B)) / Gf\n// From above:\n// * G = (Y' - Rf * ((Cr-128)*(255*CrNorm)/CrRange + Y') - Bf * ((Cb-128)*(255*CbNorm)/CbRange + Y')) / Gf\n// Since 1-Rf-Bf=Gf, we can take Y' out of the division by Gf, and we get:\n// * G = Y' - (Cr-128)*Rf/Gf*(255*CrNorm)/CrRange - (Cb-128)*Bf/Gf*(255*CbNorm)/CbRange\n// That we can compute, with fixed point arithmetic, by\n// * G = Y' - ((Cr-128)*[Rf/Gf*(255*CrNorm)/CrRange] + (Cb-128)*[Bf/Gf*(255*CbNorm)/CbRange])>>N\n// \n// Note : in ITU-T T.871(JPEG), Y=Y', so that part could be optimized out\n\n\n#define FIXED_POINT_VALUE(value, precision) ((int)(((value)*(1<<precision))+0.5))\n\n// see above for description\ntypedef struct\n{\n\tuint8_t r_factor;    // [Rf]\n\tuint8_t g_factor;    // [Gf]\n\tuint8_t b_factor;    // [Bf]\n\tuint8_t cb_factor;   // [CbRange/(255*CbNorm)]\n\tuint8_t cr_factor;   // [CrRange/(255*CrNorm)]\n\tuint8_t y_factor;    // [(YMax-YMin)/255]\n\tuint8_t y_offset;    // YMin\n} 
RGB2YUVParam;\n\ntypedef struct\n{\n\tuint8_t cb_factor;   // [(255*CbNorm)/CbRange]\n\tuint8_t cr_factor;   // [(255*CrNorm)/CrRange]\n\tuint8_t g_cb_factor; // [Bf/Gf*(255*CbNorm)/CbRange]\n\tuint8_t g_cr_factor; // [Rf/Gf*(255*CrNorm)/CrRange]\n\tuint8_t y_factor;    // [(YMax-YMin)/255]\n\tuint8_t y_offset;    // YMin\n} YUV2RGBParam;\n\n#define RGB2YUV_PARAM(Rf, Bf, YMin, YMax, CbCrRange) \\\n{.r_factor=FIXED_POINT_VALUE(Rf, 8), \\\n.g_factor=256-FIXED_POINT_VALUE(Rf, 8)-FIXED_POINT_VALUE(Bf, 8), \\\n.b_factor=FIXED_POINT_VALUE(Bf, 8), \\\n.cb_factor=FIXED_POINT_VALUE((CbCrRange/255.0)/(2.0*(1-Bf)), 8), \\\n.cr_factor=FIXED_POINT_VALUE((CbCrRange/255.0)/(2.0*(1-Rf)), 8), \\\n.y_factor=FIXED_POINT_VALUE((YMax-YMin)/255.0, 7), \\\n.y_offset=YMin}\n\n#define YUV2RGB_PARAM(Rf, Bf, YMin, YMax, CbCrRange) \\\n{.cb_factor=FIXED_POINT_VALUE(255.0*(2.0*(1-Bf))/CbCrRange, 6), \\\n.cr_factor=FIXED_POINT_VALUE(255.0*(2.0*(1-Rf))/CbCrRange, 6), \\\n.g_cb_factor=FIXED_POINT_VALUE(Bf/(1.0-Bf-Rf)*255.0*(2.0*(1-Bf))/CbCrRange, 7), \\\n.g_cr_factor=FIXED_POINT_VALUE(Rf/(1.0-Bf-Rf)*255.0*(2.0*(1-Rf))/CbCrRange, 7), \\\n.y_factor=FIXED_POINT_VALUE(255.0/(YMax-YMin), 7), \\\n.y_offset=YMin}\n\nstatic const RGB2YUVParam RGB2YUV[3] = {\n\t// ITU-T T.871 (JPEG)\n\tRGB2YUV_PARAM(0.299, 0.114, 0.0, 255.0, 255.0),\n\t// ITU-R BT.601-7\n\tRGB2YUV_PARAM(0.299, 0.114, 16.0, 235.0, 224.0),\n\t// ITU-R BT.709-6\n\tRGB2YUV_PARAM(0.2126, 0.0722, 16.0, 235.0, 224.0)\n};\n\nstatic const YUV2RGBParam YUV2RGB[3] = {\n\t// ITU-T T.871 (JPEG)\n\tYUV2RGB_PARAM(0.299, 0.114, 0.0, 255.0, 255.0),\n\t// ITU-R BT.601-7\n\tYUV2RGB_PARAM(0.299, 0.114, 16.0, 235.0, 224.0),\n\t// ITU-R BT.709-6\n\tYUV2RGB_PARAM(0.2126, 0.0722, 16.0, 235.0, 224.0)\n};\n\n\nvoid rgb24_yuv420_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *RGB, uint32_t RGB_stride, \n\tuint8_t *Y, uint8_t *U, uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tYCbCrType yuv_type)\n{\n\tconst RGB2YUVParam *const param = 
&(RGB2YUV[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tuint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tfor(x=0; x<(width-1); x+=2)\n\t\t{\n\t\t\t// compute yuv for the four pixels, u and v values are summed\n\t\t\tuint8_t y_tmp;\n\t\t\tint16_t u_tmp, v_tmp;\n\t\t\t\n\t\t\ty_tmp = (param->r_factor*rgb_ptr1[0] + param->g_factor*rgb_ptr1[1] + param->b_factor*rgb_ptr1[2])>>8;\n\t\t\tu_tmp = rgb_ptr1[2]-y_tmp;\n\t\t\tv_tmp = rgb_ptr1[0]-y_tmp;\n\t\t\ty_ptr1[0]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\t\t\t\n\t\t\ty_tmp = (param->r_factor*rgb_ptr1[3] + param->g_factor*rgb_ptr1[4] + param->b_factor*rgb_ptr1[5])>>8;\n\t\t\tu_tmp += rgb_ptr1[5]-y_tmp;\n\t\t\tv_tmp += rgb_ptr1[3]-y_tmp;\n\t\t\ty_ptr1[1]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\n\t\t\ty_tmp = (param->r_factor*rgb_ptr2[0] + param->g_factor*rgb_ptr2[1] + param->b_factor*rgb_ptr2[2])>>8;\n\t\t\tu_tmp += rgb_ptr2[2]-y_tmp;\n\t\t\tv_tmp += rgb_ptr2[0]-y_tmp;\n\t\t\ty_ptr2[0]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\t\t\t\n\t\t\ty_tmp = (param->r_factor*rgb_ptr2[3] + param->g_factor*rgb_ptr2[4] + param->b_factor*rgb_ptr2[5])>>8;\n\t\t\tu_tmp += rgb_ptr2[5]-y_tmp;\n\t\t\tv_tmp += rgb_ptr2[3]-y_tmp;\n\t\t\ty_ptr2[1]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\n\t\t\tu_ptr[0] = (((u_tmp>>2)*param->cb_factor)>>8) + 128;\n\t\t\tv_ptr[0] = (((v_tmp>>2)*param->cr_factor)>>8) + 128;\n\t\t\t\n\t\t\trgb_ptr1 += 6;\n\t\t\trgb_ptr2 += 6;\n\t\t\ty_ptr1 += 2;\n\t\t\ty_ptr2 += 2;\n\t\t\tu_ptr += 1;\n\t\t\tv_ptr += 1;\n\t\t}\n\t}\n}\n\n\nvoid yuv420_rgb24_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *Y, const uint8_t *U, const uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tuint8_t *RGB, uint32_t RGB_stride, \n\tYCbCrType yuv_type)\n{\n\tconst YUV2RGBParam *const 
param = &(YUV2RGB[yuv_type]);\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tuint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tfor(x=0; x<(width-1); x+=2)\n\t\t{\n\t\t\tint8_t u_tmp, v_tmp;\n\t\t\tu_tmp = u_ptr[0]-128;\n\t\t\tv_tmp = v_ptr[0]-128;\n\t\t\t\n\t\t\t//compute Cb Cr color offsets, common to four pixels\n\t\t\tint16_t b_cb_offset, r_cr_offset, g_cbcr_offset;\n\t\t\tb_cb_offset = (param->cb_factor*u_tmp)>>6;\n\t\t\tr_cr_offset = (param->cr_factor*v_tmp)>>6;\n\t\t\tg_cbcr_offset = (param->g_cb_factor*u_tmp + param->g_cr_factor*v_tmp)>>7;\n\t\t\t\n\t\t\tint16_t y_tmp;\n\t\t\ty_tmp = (param->y_factor*(y_ptr1[0]-param->y_offset))>>7;\n\t\t\trgb_ptr1[2] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr1[1] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr1[0] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\ty_tmp = (param->y_factor*(y_ptr1[1]-param->y_offset))>>7;\n\t\t\trgb_ptr1[5] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr1[4] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr1[3] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\ty_tmp = (param->y_factor*(y_ptr2[0]-param->y_offset))>>7;\n\t\t\trgb_ptr2[2] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr2[1] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr2[0] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\ty_tmp = (param->y_factor*(y_ptr2[1]-param->y_offset))>>7;\n\t\t\trgb_ptr2[5] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr2[4] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr2[3] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\trgb_ptr1 += 6;\n\t\t\trgb_ptr2 += 6;\n\t\t\ty_ptr1 += 2;\n\t\t\ty_ptr2 += 2;\n\t\t\tu_ptr += 1;\n\t\t\tv_ptr += 1;\n\t\t}\n\t}\n}\n\n#ifdef __SSE2__\n\n//see rgb.txt\n#define UNPACK_RGB24_32_STEP(RS1, RS2, RS3, RS4, RS5, RS6, RD1, RD2, RD3, RD4, RD5, RD6) \\\nRD1 = _mm_unpacklo_epi8(RS1, RS4); \\\nRD2 = 
_mm_unpackhi_epi8(RS1, RS4); \\\nRD3 = _mm_unpacklo_epi8(RS2, RS5); \\\nRD4 = _mm_unpackhi_epi8(RS2, RS5); \\\nRD5 = _mm_unpacklo_epi8(RS3, RS6); \\\nRD6 = _mm_unpackhi_epi8(RS3, RS6);\n\n#define RGB2YUV_16(R, G, B, Y, U, V) \\\nY = _mm_add_epi16(_mm_mullo_epi16(R, _mm_set1_epi16(param->r_factor)), \\\n                  _mm_mullo_epi16(G, _mm_set1_epi16(param->g_factor))); \\\nY = _mm_add_epi16(Y, _mm_mullo_epi16(B, _mm_set1_epi16(param->b_factor))); \\\nY = _mm_srli_epi16(Y, 8); \\\nU = _mm_mullo_epi16(_mm_sub_epi16(B, Y), _mm_set1_epi16(param->cb_factor)); \\\nU = _mm_add_epi16(_mm_srai_epi16(U, 8), _mm_set1_epi16(128)); \\\nV = _mm_mullo_epi16(_mm_sub_epi16(R, Y), _mm_set1_epi16(param->cr_factor)); \\\nV = _mm_add_epi16(_mm_srai_epi16(V, 8), _mm_set1_epi16(128)); \\\nY = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(Y, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset));\n\n#define RGB2YUV_32 \\\n\t__m128i r_16, g_16, b_16; \\\n\t__m128i y1_16, y2_16, cb1_16, cb2_16, cr1_16, cr2_16, Y, cb, cr; \\\n\t__m128i tmp1, tmp2, tmp3, tmp4, tmp5, tmp6; \\\n\t__m128i rgb1 = LOAD_SI128((const __m128i*)(rgb_ptr1)), \\\n\t\trgb2 = LOAD_SI128((const __m128i*)(rgb_ptr1+16)), \\\n\t\trgb3 = LOAD_SI128((const __m128i*)(rgb_ptr1+32)), \\\n\t\trgb4 = LOAD_SI128((const __m128i*)(rgb_ptr2)), \\\n\t\trgb5 = LOAD_SI128((const __m128i*)(rgb_ptr2+16)), \\\n\t\trgb6 = LOAD_SI128((const __m128i*)(rgb_ptr2+32)); \\\n\t/* unpack rgb24 data to r, g and b data in separate channels*/ \\\n\t/* see rgb.txt to get an idea of the algorithm, note that we only go to the next to last step*/ \\\n\t/* here, because averaging in horizontal direction is easier like this*/ \\\n\t/* The last step is applied further on the Y channel only*/ \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, 
rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\t/* first compute Y', (B-Y') and (R-Y'), in 16bits values, for the first line */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are saved*/ \\\n\tr_16 = _mm_unpacklo_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb1_16 = _mm_sub_epi16(b_16, y1_16); \\\n\tcr1_16 = _mm_sub_epi16(r_16, y1_16); \\\n\tr_16 = _mm_unpacklo_epi8(rgb4, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb1_16 = _mm_add_epi16(cb1_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr1_16 = _mm_add_epi16(cr1_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr1), Y); \\\n\t/* same for the second line, compute Y', 
(B-Y') and (R-Y'), in 16bits values */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are added to the previous values*/ \\\n\tr_16 = _mm_unpackhi_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb1_16 = _mm_add_epi16(cb1_16, _mm_sub_epi16(b_16, y1_16)); \\\n\tcr1_16 = _mm_add_epi16(cr1_16, _mm_sub_epi16(r_16, y1_16)); \\\n\tr_16 = _mm_unpackhi_epi8(rgb4, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb1_16 = _mm_add_epi16(cb1_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr1_16 = _mm_add_epi16(cr1_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr2), Y); \\\n\t/* Rescale Cb and Cr to their final range */ \\\n\tcb1_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cb1_16, 2), 
_mm_set1_epi16(param->cb_factor)), 8), _mm_set1_epi16(128)); \\\n\tcr1_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cr1_16, 2), _mm_set1_epi16(param->cr_factor)), 8), _mm_set1_epi16(128)); \\\n\t\\\n\t/* do the same again with next data */ \\\n\trgb1 = LOAD_SI128((const __m128i*)(rgb_ptr1+48)), \\\n\trgb2 = LOAD_SI128((const __m128i*)(rgb_ptr1+64)), \\\n\trgb3 = LOAD_SI128((const __m128i*)(rgb_ptr1+80)), \\\n\trgb4 = LOAD_SI128((const __m128i*)(rgb_ptr2+48)), \\\n\trgb5 = LOAD_SI128((const __m128i*)(rgb_ptr2+64)), \\\n\trgb6 = LOAD_SI128((const __m128i*)(rgb_ptr2+80)); \\\n\t/* unpack rgb24 data to r, g and b data in separate channels*/ \\\n\t/* see rgb.txt to get an idea of the algorithm, note that we only go to the next to last step*/ \\\n\t/* here, because averaging in horizontal direction is easier like this*/ \\\n\t/* The last step is applied further on the Y channel only*/ \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\t/* first compute Y', (B-Y') and (R-Y'), in 16bits values, for the first line */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are saved*/ \\\n\tr_16 = _mm_unpacklo_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb2_16 = 
_mm_sub_epi16(b_16, y1_16); \\\n\tcr2_16 = _mm_sub_epi16(r_16, y1_16); \\\n\tr_16 = _mm_unpacklo_epi8(rgb4, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb2_16 = _mm_add_epi16(cb2_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr2_16 = _mm_add_epi16(cr2_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr1+16), Y); \\\n\t/* same for the second line, compute Y', (B-Y') and (R-Y'), in 16bits values */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are added to the previous values*/ \\\n\tr_16 = _mm_unpackhi_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb2_16 = _mm_add_epi16(cb2_16, _mm_sub_epi16(b_16, y1_16)); \\\n\tcr2_16 = _mm_add_epi16(cr2_16, _mm_sub_epi16(r_16, y1_16)); \\\n\tr_16 = _mm_unpackhi_epi8(rgb4, 
_mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb2_16 = _mm_add_epi16(cb2_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr2_16 = _mm_add_epi16(cr2_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr2+16), Y); \\\n\t/* Rescale Cb and Cr to their final range */ \\\n\tcb2_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cb2_16, 2), _mm_set1_epi16(param->cb_factor)), 8), _mm_set1_epi16(128)); \\\n\tcr2_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cr2_16, 2), _mm_set1_epi16(param->cr_factor)), 8), _mm_set1_epi16(128)); \\\n\t/* Pack and save Cb Cr */ \\\n\tcb = _mm_packus_epi16(cb1_16, cb2_16); \\\n\tcr = _mm_packus_epi16(cr1_16, cr2_16); \\\n\tSAVE_SI128((__m128i*)(u_ptr), cb); \\\n\tSAVE_SI128((__m128i*)(v_ptr), cr);\n\n\nvoid rgb24_yuv420_sse(uint32_t width, uint32_t height, \n\tconst uint8_t *RGB, uint32_t RGB_stride, \n\tuint8_t *Y, uint8_t *U, uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_load_si128\n\t#define SAVE_SI128 _mm_stream_si128\n\tconst RGB2YUVParam *const param = &(RGB2YUV[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst 
uint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tuint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tRGB2YUV_32\n\t\t\t\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n\nvoid rgb24_yuv420_sseu(uint32_t width, uint32_t height, \n\tconst uint8_t *RGB, uint32_t RGB_stride, \n\tuint8_t *Y, uint8_t *U, uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_loadu_si128\n\t#define SAVE_SI128 _mm_storeu_si128\n\tconst RGB2YUVParam *const param = &(RGB2YUV[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tuint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tRGB2YUV_32\n\t\t\t\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n#endif\n\n#ifdef __SSE2__\n\n#define UV2RGB_16(U,V,R1,G1,B1,R2,G2,B2) \\\n\tr_tmp = _mm_srai_epi16(_mm_mullo_epi16(V, _mm_set1_epi16(param->cr_factor)), 6); \\\n\tg_tmp = _mm_srai_epi16(_mm_add_epi16( \\\n\t\t_mm_mullo_epi16(U, _mm_set1_epi16(param->g_cb_factor)), \\\n\t\t_mm_mullo_epi16(V, _mm_set1_epi16(param->g_cr_factor))), 7); \\\n\tb_tmp = _mm_srai_epi16(_mm_mullo_epi16(U, _mm_set1_epi16(param->cb_factor)), 6); \\\n\tR1 = _mm_unpacklo_epi16(r_tmp, r_tmp); \\\n\tG1 = _mm_unpacklo_epi16(g_tmp, g_tmp); \\\n\tB1 = _mm_unpacklo_epi16(b_tmp, b_tmp); \\\n\tR2 = _mm_unpackhi_epi16(r_tmp, r_tmp); \\\n\tG2 = _mm_unpackhi_epi16(g_tmp, g_tmp); \\\n\tB2 = 
_mm_unpackhi_epi16(b_tmp, b_tmp); \\\n\n#define ADD_Y2RGB_16(Y1,Y2,R1,G1,B1,R2,G2,B2) \\\n\tY1 = _mm_srai_epi16(_mm_mullo_epi16(Y1, _mm_set1_epi16(param->y_factor)), 7); \\\n\tY2 = _mm_srai_epi16(_mm_mullo_epi16(Y2, _mm_set1_epi16(param->y_factor)), 7); \\\n\t\\\n\tR1 = _mm_add_epi16(Y1, R1); \\\n\tG1 = _mm_sub_epi16(Y1, G1); \\\n\tB1 = _mm_add_epi16(Y1, B1); \\\n\tR2 = _mm_add_epi16(Y2, R2); \\\n\tG2 = _mm_sub_epi16(Y2, G2); \\\n\tB2 = _mm_add_epi16(Y2, B2); \\\n\n#define PACK_RGB24_32_STEP(RS1, RS2, RS3, RS4, RS5, RS6, RD1, RD2, RD3, RD4, RD5, RD6) \\\nRD1 = _mm_packus_epi16(_mm_and_si128(RS1,_mm_set1_epi16(0xFF)), _mm_and_si128(RS2,_mm_set1_epi16(0xFF))); \\\nRD2 = _mm_packus_epi16(_mm_and_si128(RS3,_mm_set1_epi16(0xFF)), _mm_and_si128(RS4,_mm_set1_epi16(0xFF))); \\\nRD3 = _mm_packus_epi16(_mm_and_si128(RS5,_mm_set1_epi16(0xFF)), _mm_and_si128(RS6,_mm_set1_epi16(0xFF))); \\\nRD4 = _mm_packus_epi16(_mm_srli_epi16(RS1,8), _mm_srli_epi16(RS2,8)); \\\nRD5 = _mm_packus_epi16(_mm_srli_epi16(RS3,8), _mm_srli_epi16(RS4,8)); \\\nRD6 = _mm_packus_epi16(_mm_srli_epi16(RS5,8), _mm_srli_epi16(RS6,8)); \\\n\n#define PACK_RGB24_32(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\nPACK_RGB24_32_STEP(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\nPACK_RGB24_32_STEP(RGB1, RGB2, RGB3, RGB4, RGB5, RGB6, R1, R2, G1, G2, B1, B2) \\\nPACK_RGB24_32_STEP(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\nPACK_RGB24_32_STEP(RGB1, RGB2, RGB3, RGB4, RGB5, RGB6, R1, R2, G1, G2, B1, B2) \\\nPACK_RGB24_32_STEP(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\n\n\n#define YUV2RGB_32 \\\n\t__m128i r_tmp, g_tmp, b_tmp; \\\n\t__m128i r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2; \\\n\t__m128i r_uv_16_1, g_uv_16_1, b_uv_16_1, r_uv_16_2, g_uv_16_2, b_uv_16_2; \\\n\t__m128i y_16_1, y_16_2; \\\n\t\\\n\t__m128i u = LOAD_SI128((const __m128i*)(u_ptr)); \\\n\t__m128i v = LOAD_SI128((const __m128i*)(v_ptr)); \\\n\tu = _mm_add_epi8(u, 
_mm_set1_epi8(-128)); \\\n\tv = _mm_add_epi8(v, _mm_set1_epi8(-128)); \\\n\t\\\n\t/* process first 16 pixels of first line */\\\n\t__m128i u_16 = _mm_srai_epi16(_mm_unpacklo_epi8(u, u), 8); \\\n\t__m128i v_16 = _mm_srai_epi16(_mm_unpacklo_epi8(v, v), 8); \\\n\t\\\n\tUV2RGB_16(u_16, v_16, r_uv_16_1, g_uv_16_1, b_uv_16_1, r_uv_16_2, g_uv_16_2, b_uv_16_2) \\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; \\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\t__m128i y = LOAD_SI128((const __m128i*)(y_ptr1)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_11 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_11 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_11 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t/* process first 16 pixels of second line */\\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; \\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\ty = LOAD_SI128((const __m128i*)(y_ptr2)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_21 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_21 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_21 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t/* process last 16 pixels of first line */\\\n\tu_16 = _mm_srai_epi16(_mm_unpackhi_epi8(u, u), 8); \\\n\tv_16 = _mm_srai_epi16(_mm_unpackhi_epi8(v, v), 8); \\\n\t\\\n\tUV2RGB_16(u_16, v_16, r_uv_16_1, g_uv_16_1, b_uv_16_1, r_uv_16_2, g_uv_16_2, b_uv_16_2) \\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; 
\\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\ty = LOAD_SI128((const __m128i*)(y_ptr1+16)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_12 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_12 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_12 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t/* process last 16 pixels of second line */\\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; \\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\ty = LOAD_SI128((const __m128i*)(y_ptr2+16)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_22 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_22 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_22 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t__m128i rgb_1, rgb_2, rgb_3, rgb_4, rgb_5, rgb_6; \\\n\t\\\n\tPACK_RGB24_32(r_8_11, r_8_12, g_8_11, g_8_12, b_8_11, b_8_12, rgb_1, rgb_2, rgb_3, rgb_4, rgb_5, rgb_6) \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1), rgb_1); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+16), rgb_2); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+32), rgb_3); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+48), rgb_4); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+64), rgb_5); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+80), rgb_6); \\\n\t\\\n\tPACK_RGB24_32(r_8_21, r_8_22, g_8_21, g_8_22, b_8_21, b_8_22, rgb_1, rgb_2, rgb_3, rgb_4, rgb_5, rgb_6) \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2), rgb_1); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+16), rgb_2); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+32), rgb_3); 
\\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+48), rgb_4); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+64), rgb_5); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+80), rgb_6); \\\n\n\nvoid yuv420_rgb24_sse(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *Y, const uint8_t *U, const uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tuint8_t *RGB, uint32_t RGB_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_load_si128\n\t#define SAVE_SI128 _mm_stream_si128\n\tconst YUV2RGBParam *const param = &(YUV2RGB[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tuint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tYUV2RGB_32\n\t\t\t\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n\nvoid yuv420_rgb24_sseu(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *Y, const uint8_t *U, const uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tuint8_t *RGB, uint32_t RGB_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_loadu_si128\n\t#define SAVE_SI128 _mm_storeu_si128\n\tconst YUV2RGBParam *const param = &(YUV2RGB[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tuint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tYUV2RGB_32\n\t\t\t\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n\n\n#endif //__SSE2__\n"
  },
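The SSE2 kernels in the file above convert planar YUV420 to packed RGB24 with fixed-point integer math: subtract the luma offset, apply the chroma contributions shared by each 2x2 block, then saturate-pack to bytes. As a plain-C mental model of what each lane computes, here is a scalar per-pixel sketch. The 298/409/100/208/516 factors are the common BT.601 video-range constants and are an assumption on my part; the SIMD path takes its actual coefficients from the `YUV2RGB[yuv_type]` parameter table, which may differ.

```c
#include <stdint.h>

/* Scalar reference for one pixel, assuming BT.601 video-range coefficients
 * in 8-bit fixed point. The SIMD code uses the YUV2RGBParam table instead. */
static uint8_t clamp8(int v)
{
	return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

static void yuv_to_rgb_pixel(uint8_t y, uint8_t u, uint8_t v,
                             uint8_t *r, uint8_t *g, uint8_t *b)
{
	int c = y - 16;   /* mirrors the _mm_sub_epi8(y, y_offset) step */
	int d = u - 128;  /* chroma samples are biased around 128 */
	int e = v - 128;
	*r = clamp8((298 * c + 409 * e + 128) >> 8);
	*g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
	*b = clamp8((298 * c + 516 * d + 128) >> 8);
}
```

Each 2x2 block of Y samples shares one (u, v) pair, which is why the SIMD code computes r_uv/g_uv/b_uv once and reuses them across both rows of the block.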
  {
    "path": "PinBox/PinBox/tools/citra/README.md",
    "content": "**BEFORE FILING AN ISSUE, READ THE RELEVANT SECTION IN THE [CONTRIBUTING](https://github.com/citra-emu/citra/blob/master/CONTRIBUTING.md#reporting-issues) FILE!!!**\n\nCitra\n==============\n[![Travis CI Build Status](https://travis-ci.org/citra-emu/citra.svg?branch=master)](https://travis-ci.org/citra-emu/citra)\n[![AppVeyor CI Build Status](https://ci.appveyor.com/api/projects/status/sdf1o4kh3g1e68m9?svg=true)](https://ci.appveyor.com/project/bunnei/citra)\n\nCitra is an experimental open-source Nintendo 3DS emulator/debugger written in C++. It is written with portability in mind, with builds actively maintained for Windows, Linux and macOS.\n\nCitra emulates a subset of 3DS hardware and therefore is useful for running/debugging homebrew applications, and it is also able to run many commercial games! Some of these do not yet run in a playable state, but we are working every day to advance the project forward. (Playable here means compatibility of at least \"Okay\" on our [game compatibility list](https://citra-emu.org/game).)\n\nCitra is licensed under the GPLv2 (or any later version). Refer to the license.txt file included. Please read the [FAQ](https://citra-emu.org/wiki/faq/) before getting started with the project.\n\nCheck out our [website](https://citra-emu.org/)!\n\nFor development discussion, please join us at #citra-dev on freenode.\n\n### Development\n\nMost of the development happens on GitHub. It's also where [our central repository](https://github.com/citra-emu/citra) is hosted.\n\nIf you want to contribute please take a look at the [Contributor's Guide](CONTRIBUTING.md) and [Developer Information](https://github.com/citra-emu/citra/wiki/Developer-Information). 
You should also contact the developers on the forum to learn about the current state of the emulator, because the [TODO list](https://docs.google.com/document/d/1SWIop0uBI9IW8VGg97TAtoT_CHNoP42FzYmvG1F4QDA) isn't maintained anymore.\n\nIf you want to contribute to the user interface translation, please check out the [citra project on transifex](https://www.transifex.com/citra/citra). We centralize the translation work there, and periodically upstream translations.\n\n### Building\n\n* __Windows__: [Windows Build](https://github.com/citra-emu/citra/wiki/Building-For-Windows)\n* __Linux__: [Linux Build](https://github.com/citra-emu/citra/wiki/Building-For-Linux)\n* __macOS__: [macOS Build](https://github.com/citra-emu/citra/wiki/Building-for-macOS)\n\n\n### Support\nWe happily accept monetary donations or donated games and hardware. Please see our [donations page](https://citra-emu.org/donate/) for more information on how you can contribute to Citra. Any donations received will go towards things like:\n* 3DS consoles for developers to explore the hardware\n* 3DS games for testing\n* Any equipment required for homebrew\n* Infrastructure setup\n* Eventually 3D displays to get proper 3D output working\n\nWe also more than gladly accept used 3DS consoles, preferably ones with firmware 4.5 or lower! If you would like to give yours away, don't hesitate to join our IRC channel #citra on [Freenode](http://webchat.freenode.net/?channels=citra) and talk to neobrain or bunnei. Mind you, IRC is slow-paced, so it might be a while until people reply. If you're in a hurry you can just leave contact details in the channel or via private message and we'll get back to you.\n"
  },
  {
    "path": "PinBox/PinBox/tools/citra/license.txt",
    "content": "\t\t    GNU GENERAL PUBLIC LICENSE\n\t\t       Version 2, June 1991\n\n Copyright (C) 1989, 1991 Free Software Foundation, Inc.,\n 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n\t\t\t    Preamble\n\n  The licenses for most software are designed to take away your\nfreedom to share and change it.  By contrast, the GNU General Public\nLicense is intended to guarantee your freedom to share and change free\nsoftware--to make sure the software is free for all its users.  This\nGeneral Public License applies to most of the Free Software\nFoundation's software and to any other program whose authors commit to\nusing it.  (Some other Free Software Foundation software is covered by\nthe GNU Lesser General Public License instead.)  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthis service if you wish), that you receive source code or can get it\nif you want it, that you can change the software or use pieces of it\nin new free programs; and that you know you can do these things.\n\n  To protect your rights, we need to make restrictions that forbid\nanyone to deny you these rights or to ask you to surrender the rights.\nThese restrictions translate to certain responsibilities for you if you\ndistribute copies of the software, or if you modify it.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must give the recipients all the rights that\nyou have.  You must make sure that they, too, receive or can get the\nsource code.  
And you must show them these terms so they know their\nrights.\n\n  We protect your rights with two steps: (1) copyright the software, and\n(2) offer you this license which gives you legal permission to copy,\ndistribute and/or modify the software.\n\n  Also, for each author's protection and ours, we want to make certain\nthat everyone understands that there is no warranty for this free\nsoftware.  If the software is modified by someone else and passed on, we\nwant its recipients to know that what they have is not the original, so\nthat any problems introduced by others will not reflect on the original\nauthors' reputations.\n\n  Finally, any free program is threatened constantly by software\npatents.  We wish to avoid the danger that redistributors of a free\nprogram will individually obtain patent licenses, in effect making the\nprogram proprietary.  To prevent this, we have made it clear that any\npatent must be licensed for everyone's free use or not licensed at all.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n\t\t    GNU GENERAL PUBLIC LICENSE\n   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n  0. This License applies to any program or other work which contains\na notice placed by the copyright holder saying it may be distributed\nunder the terms of this General Public License.  The \"Program\", below,\nrefers to any such program or work, and a \"work based on the Program\"\nmeans either the Program or any derivative work under copyright law:\nthat is to say, a work containing the Program or a portion of it,\neither verbatim or with modifications and/or translated into another\nlanguage.  (Hereinafter, translation is included without limitation in\nthe term \"modification\".)  Each licensee is addressed as \"you\".\n\nActivities other than copying, distribution and modification are not\ncovered by this License; they are outside its scope.  
The act of\nrunning the Program is not restricted, and the output from the Program\nis covered only if its contents constitute a work based on the\nProgram (independent of having been made by running the Program).\nWhether that is true depends on what the Program does.\n\n  1. You may copy and distribute verbatim copies of the Program's\nsource code as you receive it, in any medium, provided that you\nconspicuously and appropriately publish on each copy an appropriate\ncopyright notice and disclaimer of warranty; keep intact all the\nnotices that refer to this License and to the absence of any warranty;\nand give any other recipients of the Program a copy of this License\nalong with the Program.\n\nYou may charge a fee for the physical act of transferring a copy, and\nyou may at your option offer warranty protection in exchange for a fee.\n\n  2. You may modify your copy or copies of the Program or any portion\nof it, thus forming a work based on the Program, and copy and\ndistribute such modifications or work under the terms of Section 1\nabove, provided that you also meet all of these conditions:\n\n    a) You must cause the modified files to carry prominent notices\n    stating that you changed the files and the date of any change.\n\n    b) You must cause any work that you distribute or publish, that in\n    whole or in part contains or is derived from the Program or any\n    part thereof, to be licensed as a whole at no charge to all third\n    parties under the terms of this License.\n\n    c) If the modified program normally reads commands interactively\n    when run, you must cause it, when started running for such\n    interactive use in the most ordinary way, to print or display an\n    announcement including an appropriate copyright notice and a\n    notice that there is no warranty (or else, saying that you provide\n    a warranty) and that users may redistribute the program under\n    these conditions, and telling the user how to view a copy of this\n  
  License.  (Exception: if the Program itself is interactive but\n    does not normally print such an announcement, your work based on\n    the Program is not required to print an announcement.)\n\nThese requirements apply to the modified work as a whole.  If\nidentifiable sections of that work are not derived from the Program,\nand can be reasonably considered independent and separate works in\nthemselves, then this License, and its terms, do not apply to those\nsections when you distribute them as separate works.  But when you\ndistribute the same sections as part of a whole which is a work based\non the Program, the distribution of the whole must be on the terms of\nthis License, whose permissions for other licensees extend to the\nentire whole, and thus to each and every part regardless of who wrote it.\n\nThus, it is not the intent of this section to claim rights or contest\nyour rights to work written entirely by you; rather, the intent is to\nexercise the right to control the distribution of derivative or\ncollective works based on the Program.\n\nIn addition, mere aggregation of another work not based on the Program\nwith the Program (or with a work based on the Program) on a volume of\na storage or distribution medium does not bring the other work under\nthe scope of this License.\n\n  3. 
You may copy and distribute the Program (or a work based on it,\nunder Section 2) in object code or executable form under the terms of\nSections 1 and 2 above provided that you also do one of the following:\n\n    a) Accompany it with the complete corresponding machine-readable\n    source code, which must be distributed under the terms of Sections\n    1 and 2 above on a medium customarily used for software interchange; or,\n\n    b) Accompany it with a written offer, valid for at least three\n    years, to give any third party, for a charge no more than your\n    cost of physically performing source distribution, a complete\n    machine-readable copy of the corresponding source code, to be\n    distributed under the terms of Sections 1 and 2 above on a medium\n    customarily used for software interchange; or,\n\n    c) Accompany it with the information you received as to the offer\n    to distribute corresponding source code.  (This alternative is\n    allowed only for noncommercial distribution and only if you\n    received the program in object code or executable form with such\n    an offer, in accord with Subsection b above.)\n\nThe source code for a work means the preferred form of the work for\nmaking modifications to it.  For an executable work, complete source\ncode means all the source code for all modules it contains, plus any\nassociated interface definition files, plus the scripts used to\ncontrol compilation and installation of the executable.  
However, as a\nspecial exception, the source code distributed need not include\nanything that is normally distributed (in either source or binary\nform) with the major components (compiler, kernel, and so on) of the\noperating system on which the executable runs, unless that component\nitself accompanies the executable.\n\nIf distribution of executable or object code is made by offering\naccess to copy from a designated place, then offering equivalent\naccess to copy the source code from the same place counts as\ndistribution of the source code, even though third parties are not\ncompelled to copy the source along with the object code.\n\n  4. You may not copy, modify, sublicense, or distribute the Program\nexcept as expressly provided under this License.  Any attempt\notherwise to copy, modify, sublicense or distribute the Program is\nvoid, and will automatically terminate your rights under this License.\nHowever, parties who have received copies, or rights, from you under\nthis License will not have their licenses terminated so long as such\nparties remain in full compliance.\n\n  5. You are not required to accept this License, since you have not\nsigned it.  However, nothing else grants you permission to modify or\ndistribute the Program or its derivative works.  These actions are\nprohibited by law if you do not accept this License.  Therefore, by\nmodifying or distributing the Program (or any work based on the\nProgram), you indicate your acceptance of this License to do so, and\nall its terms and conditions for copying, distributing or modifying\nthe Program or works based on it.\n\n  6. Each time you redistribute the Program (or any work based on the\nProgram), the recipient automatically receives a license from the\noriginal licensor to copy, distribute or modify the Program subject to\nthese terms and conditions.  
You may not impose any further\nrestrictions on the recipients' exercise of the rights granted herein.\nYou are not responsible for enforcing compliance by third parties to\nthis License.\n\n  7. If, as a consequence of a court judgment or allegation of patent\ninfringement or for any other reason (not limited to patent issues),\nconditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot\ndistribute so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you\nmay not distribute the Program at all.  For example, if a patent\nlicense would not permit royalty-free redistribution of the Program by\nall those who receive copies directly or indirectly through you, then\nthe only way you could satisfy both it and this License would be to\nrefrain entirely from distribution of the Program.\n\nIf any portion of this section is held invalid or unenforceable under\nany particular circumstance, the balance of the section is intended to\napply and the section as a whole is intended to apply in other\ncircumstances.\n\nIt is not the purpose of this section to induce you to infringe any\npatents or other property right claims or to contest validity of any\nsuch claims; this section has the sole purpose of protecting the\nintegrity of the free software distribution system, which is\nimplemented by public license practices.  Many people have made\ngenerous contributions to the wide range of software distributed\nthrough that system in reliance on consistent application of that\nsystem; it is up to the author/donor to decide if he or she is willing\nto distribute software through any other system and a licensee cannot\nimpose that choice.\n\nThis section is intended to make thoroughly clear what is believed to\nbe a consequence of the rest of this License.\n\n  8. 
If the distribution and/or use of the Program is restricted in\ncertain countries either by patents or by copyrighted interfaces, the\noriginal copyright holder who places the Program under this License\nmay add an explicit geographical distribution limitation excluding\nthose countries, so that distribution is permitted only in or among\ncountries not thus excluded.  In such case, this License incorporates\nthe limitation as if written in the body of this License.\n\n  9. The Free Software Foundation may publish revised and/or new versions\nof the General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\nEach version is given a distinguishing version number.  If the Program\nspecifies a version number of this License which applies to it and \"any\nlater version\", you have the option of following the terms and conditions\neither of that version or of any later version published by the Free\nSoftware Foundation.  If the Program does not specify a version number of\nthis License, you may choose any version ever published by the Free Software\nFoundation.\n\n  10. If you wish to incorporate parts of the Program into other free\nprograms whose distribution conditions are different, write to the author\nto ask for permission.  For software which is copyrighted by the Free\nSoftware Foundation, write to the Free Software Foundation; we sometimes\nmake exceptions for this.  Our decision will be guided by the two goals\nof preserving the free status of all derivatives of our free software and\nof promoting the sharing and reuse of software generally.\n\n\t\t\t    NO WARRANTY\n\n  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY\nFOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  
EXCEPT WHEN\nOTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES\nPROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED\nOR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS\nTO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE\nPROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,\nREPAIR OR CORRECTION.\n\n  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR\nREDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,\nINCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING\nOUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED\nTO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY\nYOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER\nPROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGES.\n\n\t\t     END OF TERMS AND CONDITIONS\n\n\t    How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nconvey the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software; you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation; either version 2 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License along\n    with this program; if not, write to the Free Software Foundation, Inc.,\n    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nAlso add information on how to contact you by electronic and paper mail.\n\nIf the program is interactive, make it output a short notice like this\nwhen it starts in an interactive mode:\n\n    Gnomovision version 69, Copyright (C) year name of author\n    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  Of course, the commands you use may\nbe called something other than `show w' and `show c'; they could even be\nmouse-clicks or menu items--whatever suits your program.\n\nYou should also get your employer (if you work as a programmer) or your\nschool, if any, to sign a \"copyright disclaimer\" for the program, if\nnecessary.  
Here is a sample; alter the names:\n\n  Yoyodyne, Inc., hereby disclaims all copyright interest in the program\n  `Gnomovision' (which makes passes at compilers) written by James Hacker.\n\n  <signature of Ty Coon>, 1 April 1989\n  Ty Coon, President of Vice\n\nThis General Public License does not permit incorporating your program into\nproprietary programs.  If your program is a subroutine library, you may\nconsider it more useful to permit linking proprietary applications with the\nlibrary.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.\n\n\nThe icons used in this project have the following licenses:\n\nIcon Name         | License       | Origin/Author\n---               | ---           | ---\nchecked.png       | Free for non-commercial use\nconnected.png     | CC BY-ND 3.0  | https://icons8.com\ndisconnected.png  | CC BY-ND 3.0  | https://icons8.com\nfailed.png        | Free for non-commercial use\nlock.png          | CC BY-ND 3.0  | https://icons8.com\nplus_folder.png   | CC BY-ND 3.0  | https://icons8.com\nbad_folder.png    | CC BY-ND 3.0  | https://icons8.com\nchip.png          | CC BY-ND 3.0  | https://icons8.com\nfolder.png        | CC BY-ND 3.0  | https://icons8.com\nplus.png          | CC BY-ND 3.0  | https://icons8.com\nsd_card.png       | CC BY-ND 3.0  | https://icons8.com\n"
  },
  {
    "path": "PinBox/PinBox.sln",
    "content": "﻿\r\nMicrosoft Visual Studio Solution File, Format Version 12.00\r\n# Visual Studio 14\r\nVisualStudioVersion = 14.0.25420.1\r\nMinimumVisualStudioVersion = 10.0.40219.1\r\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"PinBox\", \"PinBox\\PinBox.vcxproj\", \"{7C85BF63-9A1F-43F0-9560-6E51BA537C01}\"\r\nEndProject\r\nGlobal\r\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\r\n\t\tCitra|x64 = Citra|x64\r\n\t\tCitra|x86 = Citra|x86\r\n\t\tCitra-QT|x64 = Citra-QT|x64\r\n\t\tCitra-QT|x86 = Citra-QT|x86\r\n\t\tNetlink|x64 = Netlink|x64\r\n\t\tNetlink|x86 = Netlink|x86\r\n\tEndGlobalSection\r\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra|x64.ActiveCfg = Citra|x64\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra|x64.Build.0 = Citra|x64\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra|x86.ActiveCfg = Citra|Win32\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra|x86.Build.0 = Citra|Win32\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra-QT|x64.ActiveCfg = Citra-QT|x64\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra-QT|x64.Build.0 = Citra-QT|x64\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra-QT|x86.ActiveCfg = Citra-QT|Win32\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Citra-QT|x86.Build.0 = Citra-QT|Win32\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Netlink|x64.ActiveCfg = Netlink|x64\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Netlink|x64.Build.0 = Netlink|x64\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Netlink|x86.ActiveCfg = Netlink|Win32\r\n\t\t{7C85BF63-9A1F-43F0-9560-6E51BA537C01}.Netlink|x86.Build.0 = Netlink|Win32\r\n\tEndGlobalSection\r\n\tGlobalSection(SolutionProperties) = preSolution\r\n\t\tHideSolutionNode = FALSE\r\n\tEndGlobalSection\r\nEndGlobal\r\n"
  },
  {
    "path": "PinBoxServer/Check-Pinbox-Firewall-Rules.bat",
    "content": "@echo off\ntitle Pinbox Firewall Setup\necho Checking PinBox Server executable...\nIF NOT EXIST \"%~dp0PinBoxServer.exe\" GOTO pinnotdetected\necho PinBox Executable found!\necho Elevating privileges...\nif not \"%1\"==\"am_admin\" (powershell start -verb runas '%0' am_admin & exit /b)\n\necho Admin detected.\necho Adding rules...\nrem REMEMBER TO CHANGE THE PROGRAM NAME IF THE BINARY FILE NAME CHANGES\nnetsh advfirewall firewall add rule name=\"PinboxOut\" dir=out action=allow profile=any program=\"%~dp0PinBoxServer.exe\" enable=yes\nnetsh advfirewall firewall add rule name=\"PinBoxIn\" dir=in action=allow profile=any program=\"%~dp0PinBoxServer.exe\" enable=yes\necho Done\npause\nexit\n\n:pinnotdetected\necho PinBoxServer.exe doesn't exist. Run this script from THE SAME FOLDER AS THE EXECUTABLE.\npause\nexit"
  },
  {
    "path": "PinBoxServer/PinBoxServer/AudioStreamSession.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"AudioStreamSession.h\"\r\n#include <iostream>\r\n#include <cassert>\r\n#include \"const.h\"\r\n\r\n#define AUDIO_BUFFER_SIZE 0x500000 // 5 Mb\r\n\r\nnamespace\r\n{\r\n\tvoid createNew(void* arg) {\r\n\t\tstatic_cast<AudioStreamSession*>(arg)->loopbackThread();\r\n\t}\r\n}\r\n\r\nvoid AudioStreamSession::useDefaultDevice()\r\n{\r\n\tHRESULT hr = S_OK;\r\n\tIMMDeviceEnumerator *pMMDeviceEnumerator;\r\n\t//------------------------------------------------\r\n\t// activate a device enumerator\r\n\thr = CoCreateInstance(\r\n\t\t__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,\r\n\t\t__uuidof(IMMDeviceEnumerator),\r\n\t\t(void**)&pMMDeviceEnumerator\r\n\t);\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"CoCreateInstance(IMMDeviceEnumerator) failed: hr = 0x%08x\", hr);\r\n\t\treturn;\r\n\t}\r\n\t//------------------------------------------------\r\n\t// get the default render endpoint\r\n\thr = pMMDeviceEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &_pMMDevice);\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IMMDeviceEnumerator::GetDefaultAudioEndpoint failed: hr = 0x%08x\", hr);\r\n\t}\r\n\tpMMDeviceEnumerator->Release();\r\n}\r\n\r\nvoid AudioStreamSession::StartAudioStream()\r\n{\r\n\tCoInitialize(NULL);\r\n\tthis->useDefaultDevice();\r\n\t//-----------------------------------------------------\r\n\t// create a \"stop capturing now\" event\r\n\t_stopEvent = CreateEvent(NULL, FALSE, FALSE, NULL);\r\n\tif (NULL == _stopEvent) {\r\n\t\tprintf(\"CreateEvent failed: last error is %u\", GetLastError());\r\n\t\treturn;\r\n\t}\r\n\t//-----------------------------------------------------\r\n\t// init tmp buffer\r\n\taudioBuffer = (u8*)malloc(AUDIO_BUFFER_SIZE);\r\n\taudioBufferSize = 0;\r\n\taudioFrames = 0;\r\n\t_mutex = new std::mutex();\r\n\t//-----------------------------------------------------\r\n\t// start loopback record thread\r\n\t_thread = std::thread(createNew, static_cast<AudioStreamSession*>(this));\r\n}\r\n\r\n\r\nvoid 
AudioStreamSession::StopStreaming()\r\n{\r\n\tSetEvent(_stopEvent);\r\n}\r\n\r\nvoid AudioStreamSession::ReadFromBuffer(u8* outBuf, u32 readSize)\r\n{\r\n\tif (!outBuf) return;\r\n\tif (readSize > audioBufferSize) return;\r\n\t_mutex->lock();\r\n\t// copy the requested bytes out of the front of the buffer\r\n\tmemcpy(outBuf, audioBuffer, readSize);\r\n\r\n\t// update the stored size\r\n\taudioBufferSize -= readSize;\r\n\t// shift the remaining data to the front; the regions overlap, so memmove\r\n\tmemmove(audioBuffer, audioBuffer + readSize, audioBufferSize);\r\n\t_mutex->unlock();\r\n}\r\n\r\nvoid AudioStreamSession::ResetStorageBuffer()\r\n{\r\n\t_mutex->lock();\r\n\taudioBufferSize = 0;\r\n\t_mutex->unlock();\r\n}\r\n\r\n\r\nvoid AudioStreamSession::Pause()\r\n{\r\n\tif (_isPaused) return;\r\n\t_isPaused = true;\r\n\tHRESULT hr = S_OK;\r\n\thr = _pAudioClient->Stop();\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IAudioClient::Stop failed: hr = 0x%08x\\n\", hr);\r\n\t\treturn;\r\n\t}\r\n}\r\n\r\nvoid AudioStreamSession::Resume()\r\n{\r\n\tif (!_isPaused) return;\r\n\t_isPaused = false;\r\n\tHRESULT hr = S_OK;\r\n\thr = _pAudioClient->Start();\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IAudioClient::Start failed: hr = 0x%08x\\n\", hr);\r\n\t\treturn;\r\n\t}\r\n}\r\n\r\nvoid AudioStreamSession::loopbackThread()\r\n{\r\n\tHRESULT hr = S_OK;\r\n\thr = _pMMDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&_pAudioClient);\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IMMDevice::Activate(IAudioClient) failed: hr = 0x%08x\\n\", hr);\r\n\t\treturn;\r\n\t}\r\n\r\n\t// get the default device periodicity\r\n\tREFERENCE_TIME hnsDefaultDevicePeriod;\r\n\thr = _pAudioClient->GetDevicePeriod(&hnsDefaultDevicePeriod, NULL);\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IAudioClient::GetDevicePeriod failed: hr = 0x%08x\\n\", hr);\r\n\t\treturn;\r\n\t}\r\n\r\n\t// get the default device format\r\n\tWAVEFORMATEX *pwfx;\r\n\thr = _pAudioClient->GetMixFormat(&pwfx);\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IAudioClient::GetMixFormat failed: hr = 0x%08x\\n\", 
hr);\r\n\t\treturn;\r\n\t}\r\n\r\n\t//pwfx->nSamplesPerSec = 22050;\r\n\r\n\tswitch (pwfx->wFormatTag) {\r\n\tcase WAVE_FORMAT_IEEE_FLOAT:\r\n\t\tpwfx->wFormatTag = WAVE_FORMAT_PCM;\r\n\t\tpwfx->wBitsPerSample = 16;\r\n\t\tpwfx->nBlockAlign = pwfx->nChannels * pwfx->wBitsPerSample / 8;\r\n\t\tpwfx->nAvgBytesPerSec = pwfx->nBlockAlign * pwfx->nSamplesPerSec;\r\n\t\tbreak;\r\n\r\n\tcase WAVE_FORMAT_EXTENSIBLE:\r\n\t\t// naked scope for case-local variable\r\n\t\tPWAVEFORMATEXTENSIBLE pEx = reinterpret_cast<PWAVEFORMATEXTENSIBLE>(pwfx);\r\n\t\tif (IsEqualGUID(KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, pEx->SubFormat)) {\r\n\t\t\tpEx->SubFormat = KSDATAFORMAT_SUBTYPE_PCM;\r\n\t\t\tpEx->Samples.wValidBitsPerSample = 16;\r\n\t\t\tpwfx->wBitsPerSample = 16; // 2 bytes per sample\r\n\t\t\tpwfx->nBlockAlign = pwfx->nChannels * pwfx->wBitsPerSample / 8;\r\n\t\t\tpwfx->nAvgBytesPerSec = pwfx->nBlockAlign * pwfx->nSamplesPerSec;\r\n\t\t}\r\n\t\telse {\r\n\t\t\tprintf(\"Don't know how to coerce mix format to int-16\\n\");\r\n\t\t\treturn;\r\n\t\t}\r\n\t\tbreak;\r\n\r\n\t}\r\n\r\n\tpwfx->cbSize = sizeof(WAVEFORMATEXTENSIBLE) - sizeof(WAVEFORMATEX);\r\n\r\n\t//pwfx->cbSize = 0;\r\n\t// create a periodic waitable timer\r\n\tHANDLE hWakeUp = CreateWaitableTimer(NULL, FALSE, NULL);\r\n\tUINT32 nBlockAlign = pwfx->nBlockAlign;\r\n\r\n\t// call IAudioClient::Initialize\r\n\t// note that AUDCLNT_STREAMFLAGS_LOOPBACK and AUDCLNT_STREAMFLAGS_EVENTCALLBACK\r\n\t// do not work together...\r\n\t// the \"data ready\" event never gets set\r\n\t// so we're going to do a timer-driven loop\r\n\thr = _pAudioClient->Initialize(\r\n\t\tAUDCLNT_SHAREMODE_SHARED,  // try this ? 
AUDCLNT_SHAREMODE_EXCLUSIVE\r\n\t\tAUDCLNT_STREAMFLAGS_LOOPBACK,\r\n\t\t0, 0, pwfx, 0\r\n\t);\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\">>>>> Error: IAudioClient::Initialize failed: hr = 0x%08x\\n\", hr);\r\n\t\treturn;\r\n\t}\r\n\r\n\tsampleRate = (u32)pwfx->nSamplesPerSec;\r\n\r\n\t// activate an IAudioCaptureClient\r\n\tIAudioCaptureClient *pAudioCaptureClient;\r\n\thr = _pAudioClient->GetService( __uuidof(IAudioCaptureClient), (void**)&pAudioCaptureClient );\r\n\r\n\t// set the waitable timer\r\n\tLARGE_INTEGER liFirstFire;\r\n\tliFirstFire.QuadPart = -hnsDefaultDevicePeriod / 2; // negative means relative time\r\n\tLONG lTimeBetweenFires = (LONG)hnsDefaultDevicePeriod / 2 / (10 * 1000); // convert to milliseconds\r\n\tBOOL bOK = SetWaitableTimer(\r\n\t\thWakeUp,\r\n\t\t&liFirstFire,\r\n\t\tlTimeBetweenFires,\r\n\t\tNULL, NULL, FALSE\r\n\t);\r\n\r\n\t// call IAudioClient::Start\r\n\thr = _pAudioClient->Start();\r\n\t\r\n\tif (FAILED(hr)) {\r\n\t\tprintf(\"IAudioClient::Start failed: hr = 0x%08x\\n\", hr);\r\n\t\treturn;\r\n\t}\r\n\r\n\t// loopback capture loop\r\n\tHANDLE waitArray[2] = { _stopEvent, hWakeUp };\r\n\tDWORD dwWaitResult;\r\n\r\n\r\n\tbool bDone = false;\r\n\tfor (UINT32 nPasses = 0; !bDone; nPasses++) {\r\n\r\n\t\t// drain data while it is available\r\n\t\tUINT32 nNextPacketSize;\r\n\t\tfor (hr = pAudioCaptureClient->GetNextPacketSize(&nNextPacketSize); SUCCEEDED(hr) && nNextPacketSize > 0; hr = pAudioCaptureClient->GetNextPacketSize(&nNextPacketSize))\r\n\t\t{\r\n\t\t\t// get the captured data\r\n\t\t\tBYTE *pData;\r\n\t\t\tUINT32 nNumFramesToRead;\r\n\t\t\tDWORD dwFlags;\r\n\r\n\t\t\thr = pAudioCaptureClient->GetBuffer(\r\n\t\t\t\t&pData,\r\n\t\t\t\t&nNumFramesToRead,\r\n\t\t\t\t&dwFlags,\r\n\t\t\t\tNULL,\r\n\t\t\t\tNULL\r\n\t\t\t);\r\n\r\n\r\n\t\t\tif(dwFlags & AUDCLNT_BUFFERFLAGS_SILENT)\r\n\t\t\t{\r\n\t\t\t\tpData = NULL;\r\n\t\t\t\t//TODO: send data to write as silent ?\r\n\t\t\t\tcontinue;\r\n\t\t\t}\r\n\r\n\t\t\tLONG lBytesToWrite = 
nNumFramesToRead * nBlockAlign;\r\n\t\t\t// write captured wave data into a temporary buffer to wait for encoding\r\n\t\t\t_mutex->lock();\r\n\t\t\taudioFrames += nNumFramesToRead;\r\n\t\t\tif(audioBufferSize + lBytesToWrite < 5242880) // cap the buffer at 5 MB\r\n\t\t\t{\r\n\t\t\t\tmemcpy(audioBuffer + audioBufferSize, pData, lBytesToWrite);\r\n\t\t\t\taudioBufferSize += lBytesToWrite;\r\n\t\t\t}\r\n\t\t\t_mutex->unlock();\r\n\r\n\r\n\t\t\thr = pAudioCaptureClient->ReleaseBuffer(nNumFramesToRead);\r\n\t\t}\r\n\r\n\t\t//-------------------------------------------------------------------------\r\n\t\t// wait for the stop event or the next timer tick\r\n\t\t\r\n\t\tdwWaitResult = WaitForMultipleObjects(\r\n\t\t\tARRAYSIZE(waitArray), waitArray,\r\n\t\t\tFALSE, INFINITE\r\n\t\t);\r\n\t\tif (WAIT_OBJECT_0 == dwWaitResult) {\r\n\t\t\tprintf(\"Received stop event after %u passes\\n\", nPasses);\r\n\t\t\tbDone = true;\r\n\t\t\tcontinue; // exits loop\r\n\t\t}\r\n\t\tif (WAIT_OBJECT_0 + 1 != dwWaitResult) {\r\n\t\t\tprintf(\"Unexpected WaitForMultipleObjects return value %u on pass %u\\n\", dwWaitResult, nPasses);\r\n\t\t\treturn;\r\n\t\t}\r\n\t}\r\n\r\n\tprintf(\"Closing audio capture session.\\n\");\r\n\r\n\t//---------------------------------------\r\n\t// finish recording data\r\n\t// TODO: need to send something here to end the stream\r\n\t//---------------------------------------\r\n\t_pAudioClient->Stop();\r\n\tCancelWaitableTimer(hWakeUp);\r\n\tpAudioCaptureClient->Release();\r\n\tCloseHandle(hWakeUp);\r\n\tCoTaskMemFree(pwfx);\r\n\t_pAudioClient->Release();\r\n\tCloseHandle(_stopEvent);\r\n\tCoUninitialize();\r\n\r\n\t_pAudioClient = NULL;\r\n\tfree(audioBuffer);\r\n\tif (_pMMDevice != NULL) _pMMDevice->Release();\r\n\t_pMMDevice = NULL;\r\n}\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/AudioStreamSession.h",
    "content": "#pragma once\r\n#ifndef _AUDIO_STREAM_SESSION_H__\r\n#define _AUDIO_STREAM_SESSION_H__\r\n\r\n#include <windows.h>\r\n#include <mmdeviceapi.h>\r\n#include <audioclient.h>\r\n#include <thread>\r\n#include \"PPMessage.h\"\r\n#include <mutex>\r\n\r\nclass AudioStreamSession\r\n{\r\nprivate:\r\n\tIAudioClient\t\t\t\t*_pAudioClient;\r\n\tIMMDevice\t\t\t\t\t*_pMMDevice;\r\n\tHANDLE\t\t\t\t\t\t_stopEvent;\r\n\tstd::thread\t\t\t\t\t_thread;\r\n\tbool\t\t\t\t\t\t_isPaused = false;\r\n\r\n\r\n\t\r\n\tstd::mutex\t\t\t\t\t*_mutex;\r\n\r\n\r\n\r\n\tvoid\t\t\t\t\t\tuseDefaultDevice();\r\n\r\n\r\npublic:\r\n\r\n\tu32\t\t\t\t\t\t\taudioBufferSize = 0;\r\n\tu8*\t\t\t\t\t\t\taudioBuffer = nullptr;\r\n\r\n\tint\t\t\t\t\t\t\taudioFrames = 0;\r\n\tu32\t\t\t\t\t\t\tsampleRate = 0;\r\npublic:\r\n\r\n\t// Start the audio stream session (initializes everything related to it)\r\n\tvoid\t\t\t\t\t\tStartAudioStream();\r\n\tvoid\t\t\t\t\t\tPause();\r\n\tvoid\t\t\t\t\t\tResume();\r\n\t// Stop the audio stream session (releases everything related to it)\r\n\tvoid\t\t\t\t\t\tStopStreaming();\r\n\r\n\t// Read the requested amount of stored audio data if available\r\n\tvoid\t\t\t\t\t\tReadFromBuffer(u8* outBuf, u32 readSize);\r\n\t// Reset the storage buffer to zero (in case the user wants to discard stale data to avoid lag)\r\n\tvoid\t\t\t\t\t\tResetStorageBuffer();\r\n\r\n\tvoid\t\t\t\t\t\tloopbackThread();\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/Check-Pinbox-Firewall-Rules.bat",
    "content": "@echo off\ntitle Pinbox Firewall Setup\necho Checking PinBox Server executable...\nIF NOT EXIST %~dp0PinBoxServer.exe GOTO pinnotdetected\necho PinBox Executable found!\necho Elevating privileges...\nif not \"%1\"==\"am_admin\" (powershell start -verb runas '%0' am_admin & exit /b)\n\necho Admin detected.\necho Adding rules...\nrem REMEMBER TO CHANGE THE PROGRAM NAME IF THE BINARY FILE NAME CHANGES\nnetsh advfirewall firewall add rule name=\"PinboxOut\" dir=out action=allow  profile=any program= \"%~dp0PinBoxServer.exe\" enable=yes\nnetsh advfirewall firewall add rule name=\"PinBoxIn\" dir=in action=allow  profile=any program= \"%~dp0PinBoxServer.exe\" enable=yes\necho Done\npause\nexit\n\n:pinnotdetected\necho PinBoxServer.exe doesn't exist. Remember to run this script in THE SAME FOLDER AS THE EXECUTABLE.\npause\nexit"
  },
  {
    "path": "PinBoxServer/PinBoxServer/HubItem.h",
    "content": "#ifndef _PP_HUB_TITEM_H_\n#define _PP_HUB_TITEM_H_\n#include \"PPMessage.h\"\n\nenum HubItemType\n{\n\tHUB_SCREEN = 0x0,\n\tHUB_APP,\n\tHUB_MOVIE,\n};\n\nclass HubItem\n{\npublic:\n\tstd::string\t\t\t\tuuid;\n\n\t// app name\n\tstd::string\t\t\t\tname;\n\n\t// thumb image should be 64x64 png image\n\tu8*\t\t\t\t\t\tthumbBuf;\n\tu32\t\t\t\t\t\tthumbSize;\n\n\tHubItemType\t\t\t\ttype;\n\n\n\t// data\n\tstd::string\t\t\t\tthumbImage;\n\tstd::string\t\t\t\texePath;\n\tstd::string\t\t\t\tprocessName;\n};\n\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/InputStreamSession.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"InputStreamSession.h\"\r\n#include <fakeinput.hpp>\r\n#include <libconfig.h++>\r\n#include <iostream>\r\n\r\nInputStreamSession::InputStreamSession()\r\n{\r\n\tinitMapKey();\r\n\tLoadInputConfig();\r\n\tinitVIGEM();\r\n}\r\n\r\n\r\nInputStreamSession::~InputStreamSession()\r\n{\r\n}\r\n\r\nvoid InputStreamSession::LoadInputConfig()\r\n{\r\n\t//-----------------------------------------------\r\n\t// Init default mapping\r\n\t//-----------------------------------------------\r\n\tm_defaultProfile = new KeyMappingProfile();\r\n\tm_defaultProfile->name = \"Default\";\r\n\t//m_defaultProfile->type = \"keyboard\";\r\n\tm_defaultProfile->type = \"x360\";\r\n\tm_defaultProfile->mappings[0] = FakeInput::Key_Z;\r\n\tm_defaultProfile->mappings[1] = FakeInput::Key_X;\r\n\tm_defaultProfile->mappings[10] = FakeInput::Key_A;\r\n\tm_defaultProfile->mappings[11] = FakeInput::Key_S;\r\n\tm_defaultProfile->mappings[9] = FakeInput::Key_Q;\r\n\tm_defaultProfile->mappings[8] = FakeInput::Key_W;\r\n\tm_defaultProfile->mappings[6] = FakeInput::Key_Right;\r\n\tm_defaultProfile->mappings[7] = FakeInput::Key_Down;\r\n\tm_defaultProfile->mappings[5] = FakeInput::Key_Left;\r\n\tm_defaultProfile->mappings[4] = FakeInput::Key_Up;\r\n\tm_defaultProfile->mappings[3] = FakeInput::Key_N;\r\n\tm_defaultProfile->mappings[2] = FakeInput::Key_M;\r\n\tm_defaultProfile->circlePadAsMouse = false;\r\n\t// circle pad act as dpad\r\n\tm_defaultProfile->mappings[30] = FakeInput::Key_Up;\r\n\tm_defaultProfile->mappings[31] = FakeInput::Key_Down;\r\n\tm_defaultProfile->mappings[29] = FakeInput::Key_Left;\r\n\tm_defaultProfile->mappings[28] = FakeInput::Key_Right;\r\n\t// for new3ds only\r\n\tm_defaultProfile->mappings[14] = FakeInput::Key_E;\r\n\tm_defaultProfile->mappings[15] = FakeInput::Key_D;\r\n\tm_keyMappingProfiles[m_defaultProfile->name] = m_defaultProfile;\r\n\r\n\t// x360\r\n\tm_defaultProfile->controller[0] = 
XUSB_GAMEPAD_A;\r\n\tm_defaultProfile->controller[1] = XUSB_GAMEPAD_B;\r\n\tm_defaultProfile->controller[10] = XUSB_GAMEPAD_X;\r\n\tm_defaultProfile->controller[11] = XUSB_GAMEPAD_Y;\r\n\tm_defaultProfile->controller[9] = XUSB_GAMEPAD_LEFT_SHOULDER;\r\n\tm_defaultProfile->controller[8] = XUSB_GAMEPAD_RIGHT_SHOULDER;\r\n\tm_defaultProfile->controller[6] = XUSB_GAMEPAD_DPAD_UP;\r\n\tm_defaultProfile->controller[7] = XUSB_GAMEPAD_DPAD_DOWN;\r\n\tm_defaultProfile->controller[5] = XUSB_GAMEPAD_DPAD_LEFT;\r\n\tm_defaultProfile->controller[4] = XUSB_GAMEPAD_DPAD_RIGHT;\r\n\tm_defaultProfile->controller[3] = XUSB_GAMEPAD_START;\r\n\tm_defaultProfile->controller[2] = XUSB_GAMEPAD_BACK;\r\n\r\n\t//-----------------------------------------------\r\n\t// load config\r\n\t//-----------------------------------------------\r\n\tlibconfig::Config inputConfigFile;\r\n\ttry\r\n\t{\r\n\t\tinputConfigFile.readFile(\"input.cfg\");\r\n\t}catch(const libconfig::FileIOException& fioex)\r\n\t{\r\n\t\tstd::cout << \"[Error] Input config file was not found.\" << std::endl << std::flush;\r\n\t\treturn;\r\n\t}\r\n\tcatch (const libconfig::ParseException& pex)\r\n\t{\r\n\t\tstd::cout << \"[Error] Input config file corrupted.\" << std::endl << std::flush;\r\n\t\treturn;\r\n\t}\r\n\r\n\t// load input profiles\r\n\tconst libconfig::Setting& root = inputConfigFile.getRoot();\r\n\tconst libconfig::Setting& inputProfiles = root[\"input_profiles\"];\r\n\tint profilesCount = inputProfiles.getLength();\r\n\tfor (int i = 0; i < profilesCount; ++i)\r\n\t{\r\n\t\tKeyMappingProfile* profile = new KeyMappingProfile();\r\n\r\n\t\tstd::string name, type;\r\n\r\n\t\tif(!(inputProfiles[i].lookupValue(\"name\", name) &&\r\n\t\t\tinputProfiles[i].lookupValue(\"type\", type)))\r\n\t\t{\r\n\t\t\tdelete profile;\r\n\t\t\tcontinue;\r\n\t\t}\r\n\r\n\t\tprofile->name = name;\r\n\t\tprofile->type = type;\r\n\r\n\t\tif(type == \"keyboard\")\r\n\t\t{\r\n\t\t\tstd::string btn_A, btn_B, btn_X, btn_Y;\r\n\t\t\tstd::string btn_DPAD_UP, btn_DPAD_DOWN, btn_DPAD_LEFT, 
btn_DPAD_RIGHT;\r\n\t\t\tstd::string btn_L, btn_R, btn_ZL, btn_ZR;\r\n\t\t\tstd::string btn_START, btn_SELECT;\r\n\t\t\tbool circle_as_mouse = false, zl_zr_as_mouse_btn = false;\r\n\r\n\t\t\tif (!(inputProfiles[i].lookupValue(\"btn_A\", btn_A) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_B\", btn_B) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_X\", btn_X) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_Y\", btn_Y) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_DPAD_UP\", btn_DPAD_UP) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_DPAD_DOWN\", btn_DPAD_DOWN) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_DPAD_LEFT\", btn_DPAD_LEFT) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_DPAD_RIGHT\", btn_DPAD_RIGHT) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_L\", btn_L) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_R\", btn_R) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_START\", btn_START) &&\r\n\t\t\t\tinputProfiles[i].lookupValue(\"btn_SELECT\", btn_SELECT)))\r\n\t\t\t{\r\n\t\t\t\tcontinue;\r\n\t\t\t}\r\n\r\n\t\t\tprofile->mappings[0] = m_keyMapping[btn_A];\r\n\t\t\tprofile->mappings[1] = m_keyMapping[btn_B];\r\n\t\t\tprofile->mappings[10] = m_keyMapping[btn_X];\r\n\t\t\tprofile->mappings[11] = m_keyMapping[btn_Y];\r\n\t\t\tprofile->mappings[9] = m_keyMapping[btn_L];\r\n\t\t\tprofile->mappings[8] = m_keyMapping[btn_R];\r\n\t\t\tprofile->mappings[14] = m_keyMapping[btn_ZL];\r\n\t\t\tprofile->mappings[15] = m_keyMapping[btn_ZR];\r\n\t\t\tprofile->mappings[6] = m_keyMapping[btn_DPAD_RIGHT];\r\n\t\t\tprofile->mappings[7] = m_keyMapping[btn_DPAD_DOWN];\r\n\t\t\tprofile->mappings[5] = m_keyMapping[btn_DPAD_LEFT];\r\n\t\t\tprofile->mappings[4] = m_keyMapping[btn_DPAD_UP];\r\n\t\t\tprofile->mappings[3] = m_keyMapping[btn_START];\r\n\t\t\tprofile->mappings[2] = m_keyMapping[btn_SELECT];\r\n\r\n\t\t\t// optional\r\n\t\t\t// load circle pad / mouse settings\r\n\t\t\tif (inputProfiles[i].lookupValue(\"circle_pad_as_mouse\", 
circle_as_mouse))\r\n\t\t\t{\r\n\t\t\t\tinputProfiles[i].lookupValue(\"zl_zr_as_mouse_button\", zl_zr_as_mouse_btn);\r\n\t\t\t\tprofile->circlePadAsMouse = circle_as_mouse;\r\n\t\t\t\tprofile->ZLRAsMouseButton = zl_zr_as_mouse_btn;\r\n\t\t\t}\r\n\t\t\t// optional\r\n\t\t\t// load for zl and zr\r\n\t\t\tif (!zl_zr_as_mouse_btn) {\r\n\t\t\t\tif (inputProfiles[i].lookupValue(\"btn_ZL\", btn_ZL)) profile->mappings[14] = m_keyMapping[btn_ZL];\r\n\t\t\t\tif (inputProfiles[i].lookupValue(\"btn_ZR\", btn_ZR)) profile->mappings[15] = m_keyMapping[btn_ZR];\r\n\t\t\t}\r\n\r\n\t\t\tm_keyMappingProfiles[profile->name] = profile;\r\n\t\t}else if(type == \"x360\")\r\n\t\t{\r\n\t\t\tprofile->controller[0] = XUSB_GAMEPAD_A;\r\n\t\t\tprofile->controller[1] = XUSB_GAMEPAD_B;\r\n\t\t\tprofile->controller[10] = XUSB_GAMEPAD_X;\r\n\t\t\tprofile->controller[11] = XUSB_GAMEPAD_Y;\r\n\t\t\tprofile->controller[9] = XUSB_GAMEPAD_LEFT_SHOULDER;\r\n\t\t\tprofile->controller[8] = XUSB_GAMEPAD_RIGHT_SHOULDER;\r\n\t\t\tprofile->controller[6] = XUSB_GAMEPAD_DPAD_UP;\r\n\t\t\tprofile->controller[7] = XUSB_GAMEPAD_DPAD_DOWN;\r\n\t\t\tprofile->controller[5] = XUSB_GAMEPAD_DPAD_LEFT;\r\n\t\t\tprofile->controller[4] = XUSB_GAMEPAD_DPAD_RIGHT;\r\n\t\t\tprofile->controller[3] = XUSB_GAMEPAD_START;\r\n\t\t\tprofile->controller[2] = XUSB_GAMEPAD_BACK;\r\n\t\t}\r\n\t\t\r\n\t}\r\n}\r\n\r\nvoid InputStreamSession::UpdateInput(uint32_t down, uint32_t up, short cx, short cy, short ctx, short cty)\r\n{\r\n\tm_OldDown = down;\r\n\tm_OldUp = up;\r\n\tm_OldCX = cx;\r\n\tm_OldCY = cy;\r\n\tm_OldCTX = ctx;\r\n\tm_OldCTY = cty;\r\n\t/*std::cout << \"Input Down:\" << down << \r\n\t\t\" - Up:\" << up << \r\n\t\t\" - CX:\" << cx <<\r\n\t\t\" - CY:\" << cy <<\r\n\t\t\" - CTX:\" << ctx << \r\n\t\t\" - CTY:\" << cty << std::endl << std::flush;*/\r\n\tProcessInput();\r\n}\r\nvoid InputStreamSession::ProcessInput()\r\n{\r\n\t// validate profile\r\n\tif (m_currentProfile.empty()) m_currentProfile = \"Default\";\r\n\tif 
(m_keyMappingProfiles.find(m_currentProfile) == m_keyMappingProfiles.end()) m_currentProfile = \"Default\";\r\n\r\n\tKeyMappingProfile *profile = m_keyMappingProfiles[m_currentProfile];\r\n\r\n\tif(profile->type == \"keyboard\")\r\n\t{\r\n\t\t// Key Input\r\n\t\tfor (uint8_t i = 0; i < 32; ++i)\r\n\t\t{\r\n\t\t\tif (m_OldDown & BIT(i))\r\n\t\t\t{\r\n\t\t\t\tif (profile->mappings.find(i) != profile->mappings.end())\r\n\t\t\t\t{\r\n\t\t\t\t\tFakeInput::Keyboard::pressKey(profile->mappings[i]);\r\n\t\t\t\t}\r\n\t\t\t}else\r\n\t\t\t{\r\n\t\t\t\tif (profile->mappings.find(i) != profile->mappings.end())\r\n\t\t\t\t{\r\n\t\t\t\t\tFakeInput::Keyboard::releaseKey(profile->mappings[i]);\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\tif (profile->circlePadAsMouse)\r\n\t\t{\r\n\t\t\t// Circle Pad\r\n\t\t\tshort _cx = (c_cpadMax - m_OldCX);\r\n\t\t\tshort _cy = (c_cpadMax - m_OldCY);\r\n\t\t\tFakeInput::Mouse::move(_cx, _cy);\r\n\t\t}\r\n\t}else if(profile->type == \"x360\")\r\n\t{\r\n\t\tm_x360Report.wButtons = 0;\r\n\t\tm_x360Report.bLeftTrigger = 0;\r\n\t\tm_x360Report.bRightTrigger = 0;\r\n\t\t// Key Input\r\n\t\tfor (uint8_t i = 0; i < 32; ++i)\r\n\t\t{\r\n\t\t\tif (m_OldDown & BIT(i))\r\n\t\t\t{\r\n\t\t\t\tif (profile->controller.find(i) != profile->controller.end())\r\n\t\t\t\t{\r\n\t\t\t\t\tm_x360Report.wButtons |= profile->controller[i];\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// ZL\r\n\t\tif (m_OldDown & BIT(14))\r\n\t\t{\r\n\t\t\tm_x360Report.bLeftTrigger = 250;\r\n\t\t}\r\n\t\tif(m_OldUp & BIT(14))\r\n\t\t{\r\n\t\t\tm_x360Report.bLeftTrigger = 0;\r\n\t\t}\r\n\r\n\t\t// ZR\r\n\t\tif (m_OldDown & BIT(15))\r\n\t\t{\r\n\t\t\tm_x360Report.bRightTrigger = 250;\r\n\t\t}\r\n\t\tif (m_OldUp & BIT(15))\r\n\t\t{\r\n\t\t\tm_x360Report.bRightTrigger = 0;\r\n\t\t}\r\n\r\n\t\tconst short MAX_STICK_RANGE = 30000;\r\n\t\tconst float ADDITION_CSTICK = 0.2f;\r\n\r\n\t\t// Left stick\r\n\t\tif (m_OldCX > 0 && m_OldCX <= c_cpadDeadZone) m_OldCX = 0;\r\n\t\tif (m_OldCX < 0 && m_OldCX >= 
-c_cpadDeadZone) m_OldCX = 0;\r\n\r\n\t\tif (m_OldCY > 0 && m_OldCY <= c_cpadDeadZone) m_OldCY = 0;\r\n\t\tif (m_OldCY < 0 && m_OldCY >= -c_cpadDeadZone) m_OldCY = 0;\r\n\r\n\t\tfloat pCx = static_cast<float>(m_OldCX) / static_cast<float>(c_cpadMax);\r\n\t\tfloat pCy = static_cast<float>(m_OldCY) / static_cast<float>(c_cpadMax);\r\n\t\t\r\n\t\tif (pCx > 1.0f) pCx = 1.0f;\r\n\t\tif (pCx < -1.0f) pCx = -1.0f;\r\n\t\tif (pCy > 1.0f) pCy = 1.0f;\r\n\t\tif (pCy < -1.0f) pCy = -1.0f;\r\n\r\n\t\tm_x360Report.sThumbLX = (short)(pCx * MAX_STICK_RANGE);\r\n\t\tm_x360Report.sThumbLY = (short)(pCy * MAX_STICK_RANGE);\r\n\r\n\t\t// Right stick\r\n\t\tif (m_OldCTX > 0 && m_OldCTX < c_cpadDeadZone) m_OldCTX = 0;\r\n\t\tif (m_OldCTY > 0 && m_OldCTY < c_cpadDeadZone) m_OldCTY = 0;\r\n\t\tif (m_OldCTX < 0 && m_OldCTX > -c_cpadDeadZone) m_OldCTX = 0;\r\n\t\tif (m_OldCTY < 0 && m_OldCTY > -c_cpadDeadZone) m_OldCTY = 0;\r\n\r\n\t\tfloat pCtx = (static_cast<float>(m_OldCTX) / static_cast<float>(c_cpadMax)) + ADDITION_CSTICK;\r\n\t\tfloat pCty = (static_cast<float>(m_OldCTY) / static_cast<float>(c_cpadMax)) + ADDITION_CSTICK;\r\n\t\t\r\n\t\tint newCTX = (int)(pCtx * MAX_STICK_RANGE);\r\n\t\tint newCTY = (int)(pCty * MAX_STICK_RANGE);\r\n\t\tif (newCTX > MAX_STICK_RANGE) newCTX = MAX_STICK_RANGE;\r\n\t\tif (newCTX < -MAX_STICK_RANGE) newCTX = -MAX_STICK_RANGE;\r\n\t\tif (newCTY > MAX_STICK_RANGE) newCTY = MAX_STICK_RANGE;\r\n\t\tif (newCTY < -MAX_STICK_RANGE) newCTY = -MAX_STICK_RANGE;\r\n\r\n\t\tm_x360Report.sThumbRX = newCTX;\r\n\t\tm_x360Report.sThumbRY = newCTY;\r\n\r\n\t\t// update\r\n\t\tif(!VIGEM_SUCCESS(vigem_target_x360_update(m_vDriver, m_x360Controller, m_x360Report)))\r\n\t\t{\r\n\t\t\tstd::cout << \"[Error] Error when submit report update to X360.\" << std::endl << std::flush;\r\n\t\t}\r\n\t\tm_OldUp = 0;\r\n\t}\r\n\t\r\n}\r\n\r\nvoid InputStreamSession::ChangeInputProfile(std::string profileName)\r\n{\r\n\r\n}\r\n\r\nvoid 
InputStreamSession::initMapKey()\r\n{\r\n\tm_keyMapping[\"A\"] = FakeInput::Key_A;\r\n\tm_keyMapping[\"B\"] = FakeInput::Key_B;\r\n\tm_keyMapping[\"C\"] = FakeInput::Key_C;\r\n\tm_keyMapping[\"D\"] = FakeInput::Key_D;\r\n\tm_keyMapping[\"E\"] = FakeInput::Key_E;\r\n\tm_keyMapping[\"F\"] = FakeInput::Key_F;\r\n\tm_keyMapping[\"G\"] = FakeInput::Key_G;\r\n\tm_keyMapping[\"H\"] = FakeInput::Key_H;\r\n\tm_keyMapping[\"I\"] = FakeInput::Key_I;\r\n\tm_keyMapping[\"J\"] = FakeInput::Key_J;\r\n\tm_keyMapping[\"K\"] = FakeInput::Key_K;\r\n\tm_keyMapping[\"L\"] = FakeInput::Key_L;\r\n\tm_keyMapping[\"M\"] = FakeInput::Key_M;\r\n\tm_keyMapping[\"N\"] = FakeInput::Key_N;\r\n\tm_keyMapping[\"O\"] = FakeInput::Key_O;\r\n\tm_keyMapping[\"P\"] = FakeInput::Key_P;\r\n\tm_keyMapping[\"Q\"] = FakeInput::Key_Q;\r\n\tm_keyMapping[\"R\"] = FakeInput::Key_R;\r\n\tm_keyMapping[\"S\"] = FakeInput::Key_S;\r\n\tm_keyMapping[\"T\"] = FakeInput::Key_T;\r\n\tm_keyMapping[\"U\"] = FakeInput::Key_U;\r\n\tm_keyMapping[\"V\"] = FakeInput::Key_V;\r\n\tm_keyMapping[\"W\"] = FakeInput::Key_W;\r\n\tm_keyMapping[\"X\"] = FakeInput::Key_X;\r\n\tm_keyMapping[\"Y\"] = FakeInput::Key_Y;\r\n\tm_keyMapping[\"Z\"] = FakeInput::Key_Z;\r\n\tm_keyMapping[\"0\"] = FakeInput::Key_0;\r\n\tm_keyMapping[\"1\"] = FakeInput::Key_1;\r\n\tm_keyMapping[\"2\"] = FakeInput::Key_2;\r\n\tm_keyMapping[\"3\"] = FakeInput::Key_3;\r\n\tm_keyMapping[\"4\"] = FakeInput::Key_4;\r\n\tm_keyMapping[\"5\"] = FakeInput::Key_5;\r\n\tm_keyMapping[\"6\"] = FakeInput::Key_6;\r\n\tm_keyMapping[\"7\"] = FakeInput::Key_7;\r\n\tm_keyMapping[\"8\"] = FakeInput::Key_8;\r\n\tm_keyMapping[\"9\"] = FakeInput::Key_9;\r\n\tm_keyMapping[\"F1\"] = FakeInput::Key_F1;\r\n\tm_keyMapping[\"F2\"] = FakeInput::Key_F2;\r\n\tm_keyMapping[\"F3\"] = FakeInput::Key_F3;\r\n\tm_keyMapping[\"F4\"] = FakeInput::Key_F4;\r\n\tm_keyMapping[\"F5\"] = FakeInput::Key_F5;\r\n\tm_keyMapping[\"F6\"] = FakeInput::Key_F6;\r\n\tm_keyMapping[\"F7\"] = 
FakeInput::Key_F7;\r\n\tm_keyMapping[\"F8\"] = FakeInput::Key_F8;\r\n\tm_keyMapping[\"F9\"] = FakeInput::Key_F9;\r\n\tm_keyMapping[\"F10\"] = FakeInput::Key_F10;\r\n\tm_keyMapping[\"F11\"] = FakeInput::Key_F11;\r\n\tm_keyMapping[\"F12\"] = FakeInput::Key_F12;\r\n\tm_keyMapping[\"Escape\"] = FakeInput::Key_Escape;\r\n\tm_keyMapping[\"Space\"] = FakeInput::Key_Space;\r\n\tm_keyMapping[\"Return\"] = FakeInput::Key_Return;\r\n\tm_keyMapping[\"Backspace\"] = FakeInput::Key_Backspace;\r\n\tm_keyMapping[\"Tab\"] = FakeInput::Key_Tab;\r\n\tm_keyMapping[\"Shift_L\"] = FakeInput::Key_Shift_L;\r\n\tm_keyMapping[\"Shift_R\"] = FakeInput::Key_Shift_R;\r\n\tm_keyMapping[\"Control_L\"] = FakeInput::Key_Control_L;\r\n\tm_keyMapping[\"Control_R\"] = FakeInput::Key_Control_R;\r\n\tm_keyMapping[\"Alt_L\"] = FakeInput::Key_Alt_L;\r\n\tm_keyMapping[\"Alt_R\"] = FakeInput::Key_Alt_R;\r\n\tm_keyMapping[\"CapsLock\"] = FakeInput::Key_CapsLock;\r\n\tm_keyMapping[\"NumLock\"] = FakeInput::Key_NumLock;\r\n\tm_keyMapping[\"ScrollLock\"] = FakeInput::Key_ScrollLock;\r\n\tm_keyMapping[\"PrintScreen\"] = FakeInput::Key_PrintScreen;\r\n\tm_keyMapping[\"Insert\"] = FakeInput::Key_Insert;\r\n\tm_keyMapping[\"Delete\"] = FakeInput::Key_Delete;\r\n\tm_keyMapping[\"PageUP\"] = FakeInput::Key_PageUP;\r\n\tm_keyMapping[\"PageDown\"] = FakeInput::Key_PageDown;\r\n\tm_keyMapping[\"Home\"] = FakeInput::Key_Home;\r\n\tm_keyMapping[\"End\"] = FakeInput::Key_End;\r\n\tm_keyMapping[\"Left\"] = FakeInput::Key_Left;\r\n\tm_keyMapping[\"Right\"] = FakeInput::Key_Right;\r\n\tm_keyMapping[\"Up\"] = FakeInput::Key_Up;\r\n\tm_keyMapping[\"Down\"] = FakeInput::Key_Down;\r\n\tm_keyMapping[\"Numpad0\"] = FakeInput::Key_Numpad0;\r\n\tm_keyMapping[\"Numpad1\"] = FakeInput::Key_Numpad1;\r\n\tm_keyMapping[\"Numpad2\"] = FakeInput::Key_Numpad2;\r\n\tm_keyMapping[\"Numpad3\"] = FakeInput::Key_Numpad3;\r\n\tm_keyMapping[\"Numpad4\"] = FakeInput::Key_Numpad4;\r\n\tm_keyMapping[\"Numpad5\"] = 
FakeInput::Key_Numpad5;\r\n\tm_keyMapping[\"Numpad6\"] = FakeInput::Key_Numpad6;\r\n\tm_keyMapping[\"Numpad7\"] = FakeInput::Key_Numpad7;\r\n\tm_keyMapping[\"Numpad8\"] = FakeInput::Key_Numpad8;\r\n\tm_keyMapping[\"Numpad9\"] = FakeInput::Key_Numpad9;\r\n\tm_keyMapping[\"NumpadAdd\"] = FakeInput::Key_NumpadAdd;\r\n\tm_keyMapping[\"NumpadSubtract\"] = FakeInput::Key_NumpadSubtract;\r\n\tm_keyMapping[\"NumpadMultiply\"] = FakeInput::Key_NumpadMultiply;\r\n\tm_keyMapping[\"NumpadDivide\"] = FakeInput::Key_NumpadDivide;\r\n\tm_keyMapping[\"NumpadDecimal\"] = FakeInput::Key_NumpadDecimal;\r\n\tm_keyMapping[\"NumpadEnter\"] = FakeInput::Key_NumpadEnter;\r\n}\r\n\r\n\r\nvoid InputStreamSession::initVIGEM()\r\n{\r\n\tm_Xbox360Enable = false;\r\n\t//\n\t// Allocate driver connection object\n\t// \r\n\tm_vDriver = vigem_alloc();\r\n\r\n\t//\n\t// Attempt to connect to bus driver\n\t// \r\n\tif(!VIGEM_SUCCESS(vigem_connect(m_vDriver)))\r\n\t{\r\n\t\tstd::cout << \"[Error] Can't connect to Virtual Controller.\" << std::endl << std::flush;\r\n\t\tstd::cout << \"-> Please ask on Discord for more information: https://discordapp.com/channels/340110838947905538/340110838947905538.\" << std::endl << std::flush;\r\n\t\treturn;\r\n\t}\r\n\tstd::cout << \"[X360] Connected successfully.\" << std::endl << std::flush;\r\n\t//\n\t// Allocate target device object of type Xbox 360 Controller\n\t// \n\tm_x360Controller = vigem_target_x360_alloc();\r\n\r\n\t//\n\t// Add new Xbox 360 device to bus.\n\t// \n\t// This call blocks until the device reached working state \n\t// or an error occurred.\n\t// \n\tif (!VIGEM_SUCCESS(vigem_target_add(m_vDriver, m_x360Controller)))\n\t{\r\n\t\tstd::cout << \"[Error] Couldn't add virtual X360 device.\" << std::endl << std::flush;\n\t\treturn;\n\t}\r\n\tstd::cout << \"[X360] Added virtual x360 device successfully.\" << std::endl << std::flush;\r\n\r\n\tXUSB_REPORT_INIT(&m_x360Report);\r\n\tm_Xbox360Enable = true;\r\n}"
  },
  {
    "path": "PinBoxServer/PinBoxServer/InputStreamSession.h",
    "content": "#pragma once\r\n#ifndef _PP_INPUT_STREAM_SESSION_H_\r\n#define _PP_INPUT_STREAM_SESSION_H_\r\n\r\n#define BIT(n) (1U<<(n))\r\n#include <key.hpp>\r\n#include <map>\r\n#include \"ViGEmClient.h\"\r\n\r\n/// Key values.\nenum\n{\n\tKEY_A = BIT(0),       ///< A\n\tKEY_B = BIT(1),       ///< B\n\tKEY_SELECT = BIT(2),       ///< Select\n\tKEY_START = BIT(3),       ///< Start\n\tKEY_DRIGHT = BIT(4),       ///< D-Pad Right\n\tKEY_DLEFT = BIT(5),       ///< D-Pad Left\n\tKEY_DUP = BIT(6),       ///< D-Pad Up\n\tKEY_DDOWN = BIT(7),       ///< D-Pad Down\n\tKEY_R = BIT(8),       ///< R\n\tKEY_L = BIT(9),       ///< L\n\tKEY_X = BIT(10),      ///< X\n\tKEY_Y = BIT(11),      ///< Y\n\tKEY_ZL = BIT(14),      ///< ZL (New 3DS only)\n\tKEY_ZR = BIT(15),      ///< ZR (New 3DS only)\n\tKEY_TOUCH = BIT(20),      ///< Touch (Not actually provided by HID)\n\tKEY_CSTICK_RIGHT = BIT(24), ///< C-Stick Right (New 3DS only)\n\tKEY_CSTICK_LEFT = BIT(25), ///< C-Stick Left (New 3DS only)\n\tKEY_CSTICK_UP = BIT(26), ///< C-Stick Up (New 3DS only)\n\tKEY_CSTICK_DOWN = BIT(27), ///< C-Stick Down (New 3DS only)\n\tKEY_CPAD_RIGHT = BIT(28),   ///< Circle Pad Right\n\tKEY_CPAD_LEFT = BIT(29),   ///< Circle Pad Left\n\tKEY_CPAD_UP = BIT(30),   ///< Circle Pad Up\n\tKEY_CPAD_DOWN = BIT(31),   ///< Circle Pad Down\n\n\t// Generic catch-all directions\n\tKEY_UP = KEY_DUP | KEY_CPAD_UP,    ///< D-Pad Up or Circle Pad Up\n\tKEY_DOWN = KEY_DDOWN | KEY_CPAD_DOWN,  ///< D-Pad Down or Circle Pad Down\n\tKEY_LEFT = KEY_DLEFT | KEY_CPAD_LEFT,  ///< D-Pad Left or Circle Pad Left\n\tKEY_RIGHT = KEY_DRIGHT | KEY_CPAD_RIGHT, ///< D-Pad Right or Circle Pad Right\n};\r\nstruct KeyMappingProfile\r\n{\r\n\tstd::string\t\t\t\t\t\t\t\tname = \"\";\r\n\tstd::map<uint8_t, FakeInput::Key>\t\tmappings;\r\n\tstd::map<uint8_t, _XUSB_BUTTON>\t\t\tcontroller;\r\n\tbool\t\t\t\t\t\t\t\t\tcirclePadAsMouse = false;\r\n\tbool\t\t\t\t\t\t\t\t\tZLRAsMouseButton = false;\r\n\tstd::string\t\t\t\t\t\t\t\ttype = 
\"\";\r\n};\r\n\r\nclass InputStreamSession\r\n{\r\nprivate:\r\n\tconst short\t\t\t\t\t\t\t\t\tc_cpadDeadZone = 15;\r\n\tconst short\t\t\t\t\t\t\t\t\tc_cpadMin = 0;\r\n\tconst short\t\t\t\t\t\t\t\t\tc_cpadMax = 156;\r\n\r\n\tuint32_t\t\t\t\t\t\t\t\t\tm_OldDown;\r\n\tuint32_t\t\t\t\t\t\t\t\t\tm_OldUp;\r\n\tshort\t\t\t\t\t\t\t\t\t\tm_OldCX;\r\n\tshort\t\t\t\t\t\t\t\t\t\tm_OldCY;\r\n\tshort\t\t\t\t\t\t\t\t\t\tm_OldCTX;\r\n\tshort\t\t\t\t\t\t\t\t\t\tm_OldCTY;\r\n\r\n\tstd::string\t\t\t\t\t\t\t\t\tm_currentProfile = \"Default\";\r\n\tKeyMappingProfile*\t\t\t\t\t\t\tm_defaultProfile;\r\n\tstd::map<std::string, KeyMappingProfile*>\tm_keyMappingProfiles;\r\n\tstd::map<std::string, FakeInput::Key>\t\tm_keyMapping;\r\n\tvoid initMapKey();\r\n\r\n\r\n\tvoid initVIGEM();\r\n\tbool m_Xbox360Enable = false;\r\n\tPVIGEM_CLIENT m_vDriver;\r\n\t// xbox 360 controller\r\n\tPVIGEM_TARGET m_x360Controller;\r\n\tXUSB_REPORT m_x360Report;\r\n\r\npublic:\r\n\tInputStreamSession();\r\n\t~InputStreamSession();\r\n\r\n\tvoid LoadInputConfig();\r\n\r\n\tvoid UpdateInput(uint32_t down, uint32_t up, short cx, short cy, short ctx, short cty);\r\n\tvoid ProcessInput();\r\n\tvoid ChangeInputProfile(std::string\tprofileName);\r\n};\r\n\r\n#endif\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PPClientSession.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"PPClientSession.h\"\r\n#include \"PPMessage.h\"\r\n#include \"PPServer.h\"\r\n#include \"HubItem.h\"\r\n#include \"ServerConfig.h\"\r\n\r\nvoid PPClientSession::InitSession(evpp::TCPConnPtr conn, PPServer* parent)\r\n{\r\n\t_connection = conn;\r\n\t_server = parent;\r\n\t// static buffer\r\n\t_receivedBuffer = (u8*)malloc(BUFFER_SIZE);\r\n\t_receivedBufferSize = 0;\r\n\t// default wait size for authentication\r\n\t_waitForSize = 9;\r\n}\r\n\r\nvoid PPClientSession::DisconnectFromServer()\r\n{\r\n\tif (_tmpMessage) delete _tmpMessage;\r\n\t//TODO: stop all streams\r\n\r\n}\r\n\r\nvoid PPClientSession::PrepareVideoPacketAndSend(u8* buffer, int bufferSize)\r\n{\r\n\tsendMessageWithCodeAndData(MSG_CODE_REQUEST_NEW_SCREEN_FRAME, buffer, bufferSize);\r\n}\r\n\r\nvoid PPClientSession::PrepareAudioPacketAndSend(u8* buffer, int bufferSize, uint64_t pts)\r\n{\r\n\tsendMessageWithCodeAndData(MSG_CODE_REQUEST_NEW_AUDIO_FRAME, buffer, bufferSize);\r\n}\r\n\r\nvoid PPClientSession::ProcessMessage(evpp::Buffer* msg)\r\n{\r\n\t//------------------------------------------------\r\n\t// merge msg data into buffer\r\n\tmemcpy(_receivedBuffer + _receivedBufferSize, msg->data(), msg->size());\r\n\t// increment buffer size\r\n\t_receivedBufferSize += msg->size();\r\n\r\n\tif (_receivedBufferSize < _waitForSize || _waitForSize == 0) return;\r\n\r\n\t// process through the data buffer\r\n\tint dataAfterProcessed = _receivedBufferSize - _waitForSize;\r\n\tdo\r\n\t{\r\n\t\t// calculate data left\r\n\t\tu32 lastWaitForSize = _waitForSize;\r\n\r\n\t\t// process message data \r\n\t\t// NOTE: _waitForSize should be changed inside this function\r\n\t\tProcessIncommingMessage(_receivedBuffer, lastWaitForSize);\r\n\r\n\t\t// shift remaining data to the front (the ranges overlap, so use memmove)\r\n\t\tmemmove(_receivedBuffer, _receivedBuffer + lastWaitForSize, dataAfterProcessed);\r\n\r\n\t\t// update data left\r\n\t\t_receivedBufferSize = dataAfterProcessed;\r\n\t\tif (_receivedBufferSize < _waitForSize 
|| _waitForSize == 0) break;\r\n\t\tdataAfterProcessed = _receivedBufferSize - _waitForSize;\r\n\t} while (dataAfterProcessed >= 0);\r\n}\r\n\r\nvoid PPClientSession::ProcessIncommingMessage(u8* buffer, u32 size)\r\n{\r\n\tif(!IsAuthenticated())\r\n\t{\r\n\t\t//----------------------------------------------------\r\n\t\t// parse for header part\r\n\t\tPPMessage* autheMsg = new PPMessage();\r\n\t\tif (!autheMsg->ParseHeader(buffer))\r\n\t\t{\r\n\t\t\tstd::cout << \"[Authentication Error] Validation code incorrect \" << std::endl;\r\n\t\t\tsendMessageWithCode(MSG_CODE_RESULT_AUTHENTICATION_FAILED);\r\n\t\t\tdelete autheMsg;\r\n\t\t\treturn;\r\n\t\t}\r\n\t\t//----------------------------------------------------\r\n\t\t// validate authentication\r\n\t\tif (autheMsg->GetMessageCode() != MSG_CODE_REQUEST_AUTHENTICATION_SESSION) {\n\t\t\tstd::cout << \"[Authentication Error] Invalid session type: \" << autheMsg->GetMessageCode() << std::endl;\n\t\t\tsendMessageWithCode(MSG_CODE_RESULT_AUTHENTICATION_FAILED);\n\t\t\tdelete autheMsg;\n\t\t\treturn;\n\t\t}\r\n\t\tdelete autheMsg;\r\n\r\n\t\t_server->ScreenCapturer->registerClientSession(this);\r\n\t\t_authenticated = true;\r\n\t\t// set waiting to a header msg\r\n\t\t_currentReadState = PPREQUEST_HEADER;\r\n\t\t_waitForSize = MSG_COMMAND_SIZE;\r\n\t\t//----------------------------------------------------\r\n\t\t// send back to client to validate authentication successfully\r\n\t\tstd::cout << \"[Authentication Succeeded] Session: #\" << _connection->remote_addr() << std::endl;\r\n\t\tsendMessageWithCode(MSG_CODE_RESULT_AUTHENTICATION_SUCCESS);\r\n\t}else\r\n\t{\r\n\t\tif (!_tmpMessage) _tmpMessage = new PPMessage();\r\n\t\tif (_currentReadState == PPREQUEST_HEADER)\r\n\t\t{\r\n\t\t\tif (!_tmpMessage->ParseHeader(buffer))\r\n\t\t\t{\r\n\t\t\t\tstd::cout << \"[Parse Message Error] Validation code incorrect - are you sure this is the header part?\" << 
std::endl;\r\n\t\t\t\t_tmpMessage->ClearHeader();\r\n\t\t\t\treturn;\r\n\t\t\t}\r\n\t\t\t//----------------------------------------------------\r\n\t\t\t// switch to read body part\r\n\t\t\t_currentReadState = PPREQUEST_BODY;\r\n\t\t\t_waitForSize = _tmpMessage->GetContentSize();\r\n\t\t\t//----------------------------------------------------\r\n\t\t\t// pre-process the header code if needed\r\n\t\t\tprocessMessageHeader(_tmpMessage->GetMessageCode());\r\n\t\t\t//----------------------------------------------------\r\n\t\t\t// if the content size of the message is zero we start\r\n\t\t\t// waiting for a new message\r\n\t\t\tif (_tmpMessage->GetContentSize() == 0) {\r\n\t\t\t\t_currentReadState = PPREQUEST_HEADER;\r\n\t\t\t\t_waitForSize = MSG_COMMAND_SIZE;\r\n\t\t\t}\r\n\t\t}\r\n\t\telse if (_currentReadState == PPREQUEST_BODY)\r\n\t\t{\r\n\t\t\tif (size == _tmpMessage->GetContentSize())\r\n\t\t\t{\r\n\t\t\t\t// process body buffer\r\n\t\t\t\tprocessMessageBody(buffer, _tmpMessage->GetMessageCode());\r\n\t\t\t\t// switch to read next header part\r\n\t\t\t\t_currentReadState = PPREQUEST_HEADER;\r\n\t\t\t\t_waitForSize = MSG_COMMAND_SIZE;\r\n\t\t\t\t_tmpMessage->ClearHeader();\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n}\r\n\r\n\r\n\r\nvoid PPClientSession::sendMessageWithCode(u8 code)\r\n{\r\n\tPPMessage* msg = new PPMessage();\r\n\tmsg->BuildMessageHeader(code);\r\n\tvoid* buffer = msg->BuildMessageEmpty();\r\n\t_connection->Send(buffer, 9);\r\n\tfree(buffer);\r\n\tdelete msg;\r\n}\r\n\r\nvoid PPClientSession::sendMessageWithCodeAndData(u8 code, u8* buffer, size_t bufferSize)\r\n{\r\n\tPPMessage* msg = new PPMessage();\r\n\tmsg->BuildMessageHeader(code);\r\n\tvoid* result = msg->BuildMessage(buffer, bufferSize);\r\n\t_connection->Send(result, msg->GetMessageSize());\r\n\tfree(result);\r\n\tdelete msg;\r\n}\r\n\r\nvoid PPClientSession::processMessageHeader(u8 code)\r\n{\r\n\tswitch (code)\r\n\t{\r\n\t//case MSG_CODE_REQUEST_START_SCREEN_CAPTURE:\r\n\t//\tstd::cout << \"Client send COMMAND: Start Stream\" << 
std::endl;\r\n\t//\t_server->ScreenCapturer->startStream();\r\n\t//\tbreak;\r\n\t//case MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE:\r\n\t//\tstd::cout << \"Client send COMMAND: Stop Stream\" << std::endl;\r\n\t//\t_server->ScreenCapturer->stopStream();\r\n\t//\tbreak;\r\n\tcase MSG_CODE_REQUEST_SCREEN_RECEIVED_FRAME: \r\n\r\n\t\tbreak;\r\n\tcase MSG_CODE_REQUEST_RECEIVED_AUDIO_FRAME: \r\n\r\n\t\tbreak;\r\n\tcase MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE: \r\n\t\tbreak;\r\n\r\n\tcase MSG_CODE_SEND_INPUT_CAPTURE:\r\n\t\tbreak;\r\n\r\n\tcase MSG_CODE_REQUEST_HUB_ITEMS: {\r\n\t\tstd::cout << \"Client sent COMMAND: Request Hub Items\" << std::endl;\r\n\t\tevpp::Buffer* buf = new evpp::Buffer();\r\n\t\t// add array size\r\n\t\tu16 count = ServerConfig::Get()->HubItems.size();\r\n\t\tbuf->Append(&count, sizeof(u16));\r\n\t\tfor (auto item : ServerConfig::Get()->HubItems)\r\n\t\t{\r\n\t\t\t// add item type\r\n\t\t\tu8 type = item->type;\r\n\t\t\tbuf->Append(&type, sizeof(u8));\r\n\t\t\t// add uuid\r\n\t\t\tu16 uuidSize = item->uuid.size();\r\n\t\t\tbuf->Append(&uuidSize, sizeof(u16));\r\n\t\t\tbuf->Append(item->uuid.c_str(), uuidSize);\r\n\t\t\t// add name\r\n\t\t\tu16 nameSize = item->name.size();\r\n\t\t\tbuf->Append(&nameSize, sizeof(u16));\r\n\t\t\tbuf->Append(item->name.c_str(), nameSize);\r\n\t\t\t// add thumbnail if the item is not a monitor\r\n\t\t\tif (item->type != HUB_SCREEN)\r\n\t\t\t{\r\n\t\t\t\tbuf->Append(&item->thumbSize, sizeof(u32));\r\n\t\t\t\tbuf->Append(item->thumbBuf, item->thumbSize);\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// send and clean up\r\n\t\tsendMessageWithCodeAndData(MSG_CODE_RECEIVED_HUB_ITEMS, (u8*)buf->data(), buf->size());\r\n\t\tdelete buf;\r\n\t\tbreak;\r\n\t}\r\n\r\n\tdefault: \r\n\t\tstd::cout << \"Client \" << _connection->remote_addr() << \" sent header UNKNOWN COMMAND: \" << (int)code << std::endl;\r\n\t\tbreak;\r\n\t\r\n\t}\r\n\r\n}\r\n\r\nvoid PPClientSession::processMessageBody(u8* buffer, u8 code)\r\n{\r\n\r\n\tswitch (code)\r\n\t{\r\n\r\n\tcase 
MSG_CODE_REQUEST_START_SCREEN_CAPTURE:\r\n\t\tstd::cout << \"Client sent COMMAND: Start Stream\" << std::endl;\r\n\t\t_server->ScreenCapturer->startStream();\r\n\t\tbreak;\r\n\tcase MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE:\r\n\t\tstd::cout << \"Client sent COMMAND: Stop Stream\" << std::endl;\r\n\t\t_server->ScreenCapturer->stopStream();\r\n\t\tbreak;\r\n\r\n\r\n\t\t//================================================\r\n\t\t// Delete later\r\n\t\t//================================================\r\n\tcase MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE: {\r\n\t\t//NOTE: these settings are unused for now.\r\n\r\n\t\t// wait-for-received flag\r\n\t\tint8_t waitForReceived = READ_U8(buffer, 0);\r\n\t\tbool _waitForClientReceived = !(waitForReceived == 0);\r\n\t\t// smooth frame count\r\n\t\tu32 _waitForFrame = READ_U32(buffer, 1);\r\n\t\t// output quality\r\n\t\tu32 _outputQuality = READ_U32(buffer, 5);\r\n\t\t// output scale\r\n\t\tu32 _outputScale = READ_U32(buffer, 9);\r\n\t\t///-----------------------------------\r\n\t\tstd::cout << \"Change Stream Setting: \" << std::endl;\r\n\t\tstd::cout << \"Smooth Frame: \" << _waitForFrame << std::endl;\r\n\t\tstd::cout << \"Quality: \" << _outputQuality << std::endl;\r\n\t\tstd::cout << \"Scale: \" << _outputScale << std::endl;\r\n\t\t///-----------------------------------\r\n\t\t//_server->ScreenCapturer->changeSetting(_waitForClientReceived, _waitForFrame, _outputQuality, _outputScale);\r\n\t\tbreak;\r\n\t}\r\n\r\n\t//================================================\r\n\t// Input process\r\n\t//================================================\r\n\tcase MSG_CODE_SEND_INPUT_CAPTURE:\r\n\t{\r\n\t\t//--------------------------------------\r\n\t\t// read input data\r\n\t\tu32 down = 0;\r\n\t\tmemcpy(&down, buffer, 4);\r\n\t\tu32 up = 0;\r\n\t\tmemcpy(&up, buffer + 4, 4);\r\n\t\tshort cx = 0;\r\n\t\tmemcpy(&cx, buffer + 8, 2);\r\n\t\tshort cy = 0;\r\n\t\tmemcpy(&cy, buffer + 10, 2);\r\n\t\tshort ctx = 0;\r\n\t\tmemcpy(&ctx, buffer + 12, 
2);\r\n\t\tshort cty = 0;\r\n\t\tmemcpy(&cty, buffer + 14, 2);\r\n\t\t//--------------------------------------\r\n\t\t_server->InputStreamer->UpdateInput(down, up, cx, cy, ctx, cty);\r\n\t\tbreak;\r\n\t}\r\n\r\n\tdefault: \r\n\t\tstd::cout << \"Client \" << _connection->remote_addr() << \" sent body UNKNOWN COMMAND\" << std::endl << std::flush;\r\n\t\tbreak;\r\n\t}\r\n\r\n\r\n}"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PPClientSession.h",
    "content": "#pragma once\r\n#ifndef _PP_CLIENTSESSION_H_\r\n#define _PP_CLIENTSESSION_H_\r\n\r\n#include <fstream>\r\n#include \"PPMessage.h\"\r\n// evpp\r\n#include <glog/config.h>\r\n#include <evpp/tcp_server.h>\r\n#include <evpp/buffer.h>\r\n#include <evpp/tcp_conn.h>\r\n#include \"HubItem.h\"\r\n\r\n\r\nenum PPSession_Type { PPSESSION_NONE, PPSESSION_MOVIE, PPSESSION_SCREEN_CAPTURE, PPSESSION_INPUT_CAPTURE };\r\n\r\n#define MSG_COMMAND_SIZE 9\r\n\r\n#define PPREQUEST_NONE 0\r\n#define PPREQUEST_HEADER 10\r\n#define PPREQUEST_BODY 15\r\n// authentication code\r\n#define MSG_CODE_REQUEST_AUTHENTICATION_SESSION 1\r\n\r\n#define MSG_CODE_RESULT_AUTHENTICATION_SUCCESS 5\r\n#define MSG_CODE_RESULT_AUTHENTICATION_FAILED 6\r\n// screen capture code\r\n#define MSG_CODE_REQUEST_START_SCREEN_CAPTURE 10\r\n#define MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE 11\r\n#define MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE 12\r\n#define MSG_CODE_REQUEST_NEW_SCREEN_FRAME 15\r\n#define MSG_CODE_REQUEST_SCREEN_RECEIVED_FRAME 16\r\n#define MSG_CODE_REQUEST_NEW_AUDIO_FRAME 18\r\n#define MSG_CODE_REQUEST_RECEIVED_AUDIO_FRAME 19\r\n\r\n// input\r\n#define MSG_CODE_SEND_INPUT_CAPTURE 42\r\n#define MSG_CODE_SEND_INPUT_CAPTURE_IDLE 44\r\n\r\n#define MSG_CODE_REQUEST_HUB_ITEMS 60\r\n#define MSG_CODE_RECEIVED_HUB_ITEMS 61\r\n\r\n// 8 MB default buffer size; just enough\r\n#define BUFFER_SIZE (1024 * 1024 * 8)\r\n#define CHOP_N(BUFFER, COUNT, LENGTH) memmove(BUFFER, BUFFER + COUNT, LENGTH)\r\n\r\nclass PPServer;\r\nclass PPClientSession\r\n{\r\nprivate:\r\n\tevpp::TCPConnPtr\t\t\t_connection;\r\n\tbool\t\t\t\t\t\t_authenticated = false;\r\n\r\n\tu8*\t\t\t\t\t\t\t_receivedBuffer = nullptr;\r\n\tu32\t\t\t\t\t\t\t_receivedBufferSize = 0;\r\n\tu32\t\t\t\t\t\t\t_waitForSize = 0;\r\n\tPPMessage*\t\t\t\t\t_tmpMessage = nullptr;\r\n\tu8\t\t\t\t\t\t\t_currentReadState = PPREQUEST_NONE;\r\n\r\n\r\n\r\nprivate:\r\n\tvoid\t\t\t\t\t\tsendMessageWithCode(u8 code);\r\n\tvoid\t\t\t\t\t\tsendMessageWithCodeAndData(u8 
code, u8* buffer, size_t bufferSize);\r\n\tvoid\t\t\t\t\t\tprocessMessageHeader(u8 code);\r\n\tvoid\t\t\t\t\t\tprocessMessageBody(u8* buffer, u8 code);\r\n\r\n\tvoid\t\t\t\t\t\tProcessIncommingMessage(u8* buffer, u32 size);\r\npublic:\r\n\tPPServer*\t\t\t\t\t_server;\r\n\r\n\tvoid\t\t\t\t\t\tInitSession(evpp::TCPConnPtr conn, PPServer* parent);\r\n\tvoid\t\t\t\t\t\tProcessMessage(evpp::Buffer* msg);\r\n\tvoid\t\t\t\t\t\tDisconnectFromServer();\r\n\r\n\tbool\t\t\t\t\t\tIsAuthenticated() const { return _authenticated; }\r\n\r\n\tvoid\t\t\t\t\t\tPrepareVideoPacketAndSend(u8* buffer, int bufferSize);\r\n\tvoid\t\t\t\t\t\tPrepareAudioPacketAndSend(u8* buffer, int bufferSize, uint64_t pts);\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PPMessage.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"PPMessage.h\"\r\n\r\n\r\nPPMessage::~PPMessage()\r\n{\r\n\tif (g_content != nullptr)\r\n\t\tfree(g_content);\r\n}\r\n\r\nu8* PPMessage::BuildMessage(u8* contentBuffer, u32 contentSize)\r\n{\r\n\r\n\tg_contentSize = contentSize;\r\n\t//-----------------------------------------------\r\n\t// alloc msg buffer block\r\n\tu8* msgBuffer = (u8*)malloc(sizeof(u8) * (contentSize + 9));\r\n\t//-----------------------------------------------\r\n\t// build header\r\n\tu8* pointer = msgBuffer;\r\n\t// 1, validate code\r\n\tWRITE_CHAR_PTR(pointer, g_validateCode, 4);\r\n\t// 2, message code\r\n\tWRITE_U8(pointer, g_code);\r\n\t// 3, content size\r\n\tWRITE_U32(pointer, g_contentSize);\r\n\r\n\t//TODO: corrupted when save g_contentSize ??\r\n\t//-----------------------------------------------\r\n\t// build content data\r\n\tif (g_contentSize > 0) {\r\n\t\tmemcpy(msgBuffer+9, contentBuffer, contentSize);\r\n\t}\r\n\t//-----------------------------------------------\r\n\treturn msgBuffer;\r\n}\r\n\r\nu8* PPMessage::BuildMessageEmpty()\r\n{\r\n\treturn BuildMessage(nullptr, 0);\r\n}\r\n\r\nvoid PPMessage::BuildMessageHeader(u8 code)\r\n{\r\n\tg_code = code;\r\n}\r\n\r\nbool PPMessage::ParseHeader(u8* buffer)\r\n{\r\n\t\r\n\tif (IS_INVALID_CODE(buffer, 0))\r\n\t{\r\n\t\tprintf(\"Parse header failed. Validate code is incorrect \\n\");\r\n\t\treturn false;\r\n\t}\r\n\tsize_t readIndex = 4;\r\n\tg_code = READ_U8(buffer, readIndex); readIndex += 1;\r\n\tg_contentSize = READ_U32(buffer, readIndex); readIndex += 4;\r\n\treturn true;\r\n}\r\n\r\nvoid PPMessage::ClearHeader()\r\n{\r\n\tg_code = 0;\r\n\tg_contentSize = 0;\r\n}\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PPMessage.h",
    "content": "#pragma once\r\n#ifndef _PP_MESSAGE_H_\r\n#define _PP_MESSAGE_H_\r\n\r\n#include <cstdint>\r\n#include <cstring>\r\n#include <cstdlib>\r\n#include <cstdio>\r\n\r\ntypedef unsigned char      u8;\r\ntypedef unsigned short     u16;\r\ntypedef unsigned int       u32;\r\ntypedef unsigned long long u64;\r\n\r\n#define WRITE_CHAR_PTR(BUFFER, DATA, SIZE) memcpy(BUFFER, DATA, SIZE); BUFFER += SIZE;\r\n#define WRITE_U8(BUFFER, DATA) *(BUFFER++) = DATA;\r\n#define WRITE_U16(BUFFER, DATA) *(BUFFER++) = DATA; *(BUFFER++) = DATA >> 8;\r\n#define WRITE_U32(BUFFER, DATA) *(BUFFER++) = DATA; *(BUFFER++) = DATA >> 8; *(BUFFER++) = DATA >> 16; *(BUFFER++) = DATA >> 24;\r\n#define WRITE_U64(BUFFER, DATA) *(BUFFER++) = DATA; *(BUFFER++) = DATA >> 8; *(BUFFER++) = DATA >> 16; *(BUFFER++) = DATA >> 24;*(BUFFER++) = DATA >> 32;*(BUFFER++) = DATA >> 40;*(BUFFER++) = DATA >> 48;*(BUFFER++) = DATA >> 56;\r\n#define READ_U8(BUFFER, INDEX) BUFFER[INDEX];\r\n#define READ_U16(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8;\r\n#define READ_U32(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8 | BUFFER[INDEX + 2] << 16 | BUFFER[INDEX + 3] << 24;\r\n#define READ_U64(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8 | BUFFER[INDEX + 2] << 16 | (u64)BUFFER[INDEX + 3] << 24 | (u64)BUFFER[INDEX + 4] << 32 | (u64)BUFFER[INDEX + 5] << 40 | (u64)BUFFER[INDEX + 6] << 48 | (u64)BUFFER[INDEX + 7] << 56;\r\n#define IS_INVALID_CODE(BUFFER, INDEX) BUFFER[INDEX] != 'P' || BUFFER[INDEX+1] != 'P' || BUFFER[INDEX+2] != 'B' || BUFFER[INDEX+3] != 'X'\r\n\r\nclass PPMessage\r\n{\r\nprivate:\r\n\t//-----------------------------------------------\r\n\t// message header - 9 bytes\r\n\t// [4b] validate code : \"PPBX\"\r\n\t// [1b] message code : 0 to 255 - define message type\r\n\t// [4b] message content size \r\n\t//-----------------------------------------------\r\n\tconst char\t\t\t\t\t\t\tg_validateCode[4] = { 'P','P','B','X' };\r\n\tu8\t\t\t\t\t\t\t\t\tg_code = 0;\r\n\tu32\t\t\t\t\t\t\t\t\tg_contentSize = 
0;\r\n\r\n\t//-----------------------------------------------\r\n\t// message content\r\n\t//-----------------------------------------------\r\n\tu8*\t\t\t\t\t\t\t\t\tg_content = nullptr;\r\npublic:\r\n\t~PPMessage();\r\n\tu32\t\t\t\t\t\t\t\t\tGetMessageSize() const { return g_contentSize + 9; }\r\n\tu8*\t\t\t\t\t\t\t\t\tGetMessageContent() { return g_content; }\r\n\tu32\t\t\t\t\t\t\t\t\tGetContentSize() { return g_contentSize; }\r\n\tu8\t\t\t\t\t\t\t\t\tGetMessageCode() { return g_code; }\r\n\t//-----------------------------------------------\r\n\t// NOTE: the returned message buffer is malloc'd; the caller must free it once it has been sent.\r\n\t//-----------------------------------------------\r\n\tu8*\t\t\t\t\t\t\t\t\tBuildMessage(u8* contentBuffer, u32 contentSize);\r\n\tu8*\t\t\t\t\t\t\t\t\tBuildMessageEmpty();\r\n\tvoid\t\t\t\t\t\t\t\tBuildMessageHeader(u8 code);\r\n\r\n\tbool\t\t\t\t\t\t\t\tParseHeader(u8* buffer);\r\n\tvoid\t\t\t\t\t\t\t\tClearHeader();\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PPServer.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"PPServer.h\"\r\n#include \"ServerConfig.h\"\r\n#include <string>\r\n\r\nvoid PPServer::PrintIPAddressList()\r\n{\r\n\t// Get local host name\r\n\tchar szHostName[128] = \"\";\r\n\r\n\tif (::gethostname(szHostName, sizeof(szHostName)))\r\n\t{\r\n\t\t// Error handling -> call 'WSAGetLastError()'\r\n\t}\r\n\t// Get local IP addresses\r\n\tstruct sockaddr_in SocketAddress;\r\n\tstruct hostent* pHost = 0;\r\n\tpHost = ::gethostbyname(szHostName);\r\n\tif (!pHost)\r\n\t{\r\n\t\t// Error handling -> call 'WSAGetLastError()'\r\n\t}\r\n\tchar aszIPAddresses[10][16]; // maximum of ten IP addresses\r\n\tfor (int iCnt = 0; ((pHost->h_addr_list[iCnt]) && (iCnt < 10)); ++iCnt)\r\n\t{\r\n\t\tmemcpy(&SocketAddress.sin_addr, pHost->h_addr_list[iCnt], pHost->h_length);\r\n\t\tstrcpy(aszIPAddresses[iCnt], inet_ntoa(SocketAddress.sin_addr));\r\n\r\n\t\tstd::cout << \"IP: \" << aszIPAddresses[iCnt] << std::endl << std::flush;\r\n\t}\r\n}\r\n\r\n\r\n\r\nvoid PPServer::InitServer()\r\n{\r\n\tgoogle::InitGoogleLogging(\"PinBoxServer\");\r\n\tgoogle::SetCommandLineOption(\"GLOG_minloglevel\", \"2\");\r\n\r\n\t//===========================================================================\r\n\t// Init config\r\n\t//===========================================================================\r\n\tint cfgPort = ServerConfig::Get()->ServerPort;\r\n\tint cfgThreadNum = ServerConfig::Get()->NetworkThread;\r\n\tScreenCapturer = new ScreenCaptureSession();\r\n\tScreenCapturer->setParent(this);\r\n\tInputStreamer = new InputStreamSession();\r\n\r\n\r\n\t//===========================================================================\r\n\t// Socket server part\r\n\t//===========================================================================\r\n\tstd::string addr = \"0.0.0.0:\" + std::to_string(cfgPort);\r\n\tstd::cout << \"Running on address: \" << addr << std::endl << std::flush;\r\n\tevpp::EventLoop loop;\r\n\tevpp::TCPServer server(&loop, addr, 
\"PinBoxServer\", cfgThreadNum);\r\n\r\n\r\n\t//===========================================================================\r\n\tserver.SetMessageCallback([&](const evpp::TCPConnPtr& conn, evpp::Buffer* msg)\r\n\t{\r\n\t\tPPClientSession* session = evpp::any_cast<PPClientSession*>(conn->context());\r\n\t\tif (session != nullptr) session->ProcessMessage(msg);\r\n\t\tmsg->Reset();\r\n\t});\r\n\r\n\r\n\t//===========================================================================\r\n\tserver.SetConnectionCallback([&](const evpp::TCPConnPtr& conn)\r\n\t{\r\n\t\tstd::string addrID = conn->remote_addr();\r\n\t\tif(conn->IsConnected())\r\n\t\t{\r\n\t\t\t// new client connected\r\n\t\t\tstd::cout << \"Client: \" << addrID << \" connected to server!\" << std::endl << std::flush;\r\n\t\t\tPPClientSession* session = new PPClientSession();\r\n\t\t\tsession->InitSession(conn, this);\r\n\t\t\tevpp::Any context(session);\r\n\t\t\tconn->set_context(context);\r\n\r\n\t\t}else {\r\n\t\t\t// client disconnected\r\n\t\t\tevpp::Any sessionAny = conn->context();\r\n\t\t\tPPClientSession* session = evpp::any_cast<PPClientSession*>(sessionAny);\r\n\t\t\tif(session != nullptr)\r\n\t\t\t{\r\n\r\n\t\t\t\tstd::cout << \"Client: \" << addrID << \" disconnected from server!\" << std::endl << std::flush;\r\n\t\t\t\tsession->DisconnectFromServer();\r\n\t\t\t}\r\n\t\t}\r\n\t});\r\n\tserver.Init();\r\n\tserver.Start();\r\n\t//===========================================================================\r\n\t// print server info\r\n\t//===========================================================================\r\n\tstd::cout << \"Init Server: Success\" << std::endl << std::flush;\r\n\tstd::cout << \"-------------------------------------------\" << std::endl << std::flush;\r\n\tstd::cout << \"Please use one of these IPs in your 3DS client to connect to the server:\" << std::endl << std::flush;\r\n\tstd::cout << \"(normally it should be the last one)\" << std::endl << std::flush;\r\n\tstd::cout << 
\"-------------------------------------------\" << std::endl << std::flush;\r\n\tPrintIPAddressList();\r\n\tstd::cout << \"-------------------------------------------\" << std::endl << std::flush;\r\n\tstd::cout << \"Wait for connection...\" << std::endl << std::flush;\r\n\t//===========================================================================\r\n\tloop.Run();\r\n}\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PPServer.h",
    "content": "#pragma once\r\n\r\n#ifndef _PP_SERVER_H__\r\n#define _PP_SERVER_H__\r\n\r\n// socket\r\n#include <winsock2.h>\r\n#include <winsock.h>\r\n#include <ostream>\r\n#include <iostream>\r\n\r\n// evpp\r\n#include <glog/config.h>\r\n#include <evpp/tcp_server.h>\r\n#include <evpp/buffer.h>\r\n#include <evpp/tcp_conn.h>\r\n\r\n#include \"PPClientSession.h\"\r\n#include \"ScreenCaptureSession.h\"\r\n#include \"InputStreamSession.h\"\r\n\r\nclass PPServer\r\n{\r\n\r\n\t\r\npublic:\r\n\tstatic void PrintIPAddressList();\r\npublic:\r\n\tScreenCaptureSession*\t\t\t\t\t\t\tScreenCapturer;\r\n\tInputStreamSession*\t\t\t\t\t\t\t\tInputStreamer;\r\n\r\n\tvoid InitServer();\r\n};\r\n\r\n\r\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PinBoxServer.cpp",
    "content": "// PinBoxServer.cpp : Defines the entry point for the console application.\r\n//\r\n\r\n#include \"stdafx.h\"\r\n#include \"PPServer.h\"\r\n\r\nint main(int argc, char *argv[])\r\n{\r\n\tPPServer *server = new PPServer();\r\n\tserver->InitServer();\r\n\r\n\treturn 0;\r\n}\r\n\r\n#include \"winmain-inl.h\""
  },
  {
    "path": "PinBoxServer/PinBoxServer/PinBoxServer.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<Project DefaultTargets=\"Build\" ToolsVersion=\"14.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\r\n  <ItemGroup Label=\"ProjectConfigurations\">\r\n    <ProjectConfiguration Include=\"Debug|Win32\">\r\n      <Configuration>Debug</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Release|Win32\">\r\n      <Configuration>Release</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Debug|x64\">\r\n      <Configuration>Debug</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Release|x64\">\r\n      <Configuration>Release</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n  </ItemGroup>\r\n  <PropertyGroup Label=\"Globals\">\r\n    <ProjectGuid>{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}</ProjectGuid>\r\n    <Keyword>Win32Proj</Keyword>\r\n    <RootNamespace>PinBoxServer</RootNamespace>\r\n    <WindowsTargetPlatformVersion>10.0.10586.0</WindowsTargetPlatformVersion>\r\n  </PropertyGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <CharacterSet>Unicode</CharacterSet>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>false</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <WholeProgramOptimization>true</WholeProgramOptimization>\r\n    <CharacterSet>Unicode</CharacterSet>\r\n    
<UseOfMfc>false</UseOfMfc>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <CharacterSet>Unicode</CharacterSet>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>false</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <WholeProgramOptimization>true</WholeProgramOptimization>\r\n    <CharacterSet>Unicode</CharacterSet>\r\n  </PropertyGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\r\n  <ImportGroup Label=\"ExtensionSettings\">\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"Shared\">\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" 
Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <PropertyGroup Label=\"UserMacros\" />\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\r\n    <LinkIncremental>true</LinkIncremental>\r\n    <IncludePath>$(ProjectDir)\\..\\..\\ThirdParty\\screen_capture_lite\\include;$(ProjectDir)\\..\\..\\ThirdParty\\opus-1.2.1\\include;$(ProjectDir)\\..\\..\\ThirdParty\\libopusenc\\include;$(ProjectDir)\\..\\..\\ThirdParty\\FakeInput\\src;$(ProjectDir)\\..\\..\\ThirdParty\\ViGEm\\Include;$(ProjectDir)\\..\\..\\ThirdParty\\ffmpeg\\include;$(IncludePath)</IncludePath>\r\n    <LibraryPath>$(ProjectDir)\\..\\..\\ThirdParty\\screen_capture_lite\\lib\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\opus-1.2.1\\win32\\VS2015\\Win32\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\libopusenc\\win32\\opusenc\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\FakeInput\\bin\\lib\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\ViGEm\\Debug (dynamic);$(ProjectDir)\\..\\..\\ThirdParty\\ffmpeg\\lib;$(LibraryPath)</LibraryPath>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\r\n    <LinkIncremental>true</LinkIncremental>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\r\n    <LinkIncremental>false</LinkIncremental>\r\n    <IncludePath>$(ProjectDir)\\..\\..\\ThirdParty\\screen_capture_lite\\include;$(ProjectDir)\\..\\..\\ThirdParty\\opus-1.2.1\\include;$(ProjectDir)\\..\\..\\ThirdParty\\libopusenc\\include;$(ProjectDir)\\..\\..\\ThirdParty\\FakeInput\\src;$(ProjectDir)\\..\\..\\ThirdParty\\ViGEm\\Include;$(ProjectDir)\\..\\..\\ThirdParty\\ffmpeg\\include;$(IncludePath)</IncludePath>\r\n    
<LibraryPath>$(ProjectDir)\\..\\..\\ThirdParty\\screen_capture_lite\\lib\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\opus-1.2.1\\win32\\VS2015\\Win32\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\libopusenc\\win32\\opusenc\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\FakeInput\\bin\\lib\\$(Configuration)\\;$(ProjectDir)\\..\\..\\ThirdParty\\ViGEm\\Release (dynamic);$(ProjectDir)\\..\\..\\ThirdParty\\ffmpeg\\lib;$(LibraryPath)</LibraryPath>\r\n    <EnableManagedIncrementalBuild>true</EnableManagedIncrementalBuild>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\r\n    <LinkIncremental>false</LinkIncremental>\r\n  </PropertyGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\r\n    <ClCompile>\r\n      <PrecompiledHeader>Use</PrecompiledHeader>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <Optimization>Disabled</Optimization>\r\n      <PreprocessorDefinitions>WIN32;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n    <Link>\r\n      <SubSystem>NotSet</SubSystem>\r\n      <GenerateDebugInformation>true</GenerateDebugInformation>\r\n      <AdditionalDependencies>screen_capture_lite.lib;fakeInput.lib;opus.lib;opusenc.lib;winmm.lib;ViGEmClient.lib;Dwmapi.lib;avcodec.lib;avformat.lib;avutil.lib;avdevice.lib;avfilter.lib;swresample.lib;swscale.lib;postproc.lib;%(AdditionalDependencies)</AdditionalDependencies>\r\n    </Link>\r\n    <PostBuildEvent>\r\n      <Command>\r\n      </Command>\r\n    </PostBuildEvent>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\r\n    <ClCompile>\r\n      <PrecompiledHeader>Use</PrecompiledHeader>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <Optimization>Disabled</Optimization>\r\n      
<PreprocessorDefinitions>_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n    <Link>\r\n      <SubSystem>Console</SubSystem>\r\n      <GenerateDebugInformation>true</GenerateDebugInformation>\r\n    </Link>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\r\n    <ClCompile>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <PrecompiledHeader>Use</PrecompiledHeader>\r\n      <Optimization>MaxSpeed</Optimization>\r\n      <FunctionLevelLinking>true</FunctionLevelLinking>\r\n      <IntrinsicFunctions>true</IntrinsicFunctions>\r\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n    <Link>\r\n      <SubSystem>Console</SubSystem>\r\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\r\n      <OptimizeReferences>true</OptimizeReferences>\r\n      <GenerateDebugInformation>true</GenerateDebugInformation>\r\n      <AdditionalDependencies>screen_capture_lite.lib;fakeInput.lib;opus.lib;opusenc.lib;winmm.lib;ViGEmClient.lib;Dwmapi.lib;avcodec.lib;avformat.lib;avutil.lib;avdevice.lib;avfilter.lib;swresample.lib;swscale.lib;postproc.lib;%(AdditionalDependencies)</AdditionalDependencies>\r\n      <ImageHasSafeExceptionHandlers>false</ImageHasSafeExceptionHandlers>\r\n    </Link>\r\n    <PostBuildEvent>\r\n      <Command>\r\n      </Command>\r\n    </PostBuildEvent>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\r\n    <ClCompile>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <PrecompiledHeader>Use</PrecompiledHeader>\r\n      <Optimization>MaxSpeed</Optimization>\r\n      <FunctionLevelLinking>true</FunctionLevelLinking>\r\n      <IntrinsicFunctions>true</IntrinsicFunctions>\r\n      
<PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n    <Link>\r\n      <SubSystem>Console</SubSystem>\r\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\r\n      <OptimizeReferences>true</OptimizeReferences>\r\n      <GenerateDebugInformation>true</GenerateDebugInformation>\r\n    </Link>\r\n  </ItemDefinitionGroup>\r\n  <ItemGroup>\r\n    <ClInclude Include=\"AudioStreamSession.h\" />\r\n    <ClInclude Include=\"const.h\" />\r\n    <ClInclude Include=\"HubItem.h\" />\r\n    <ClInclude Include=\"InputStreamSession.h\" />\r\n    <ClInclude Include=\"PPClientSession.h\" />\r\n    <ClInclude Include=\"PPMessage.h\" />\r\n    <ClInclude Include=\"PPServer.h\" />\r\n    <ClInclude Include=\"ScreenCaptureSession.h\" />\r\n    <ClInclude Include=\"ServerConfig.h\" />\r\n    <ClInclude Include=\"stdafx.h\" />\r\n    <ClInclude Include=\"targetver.h\" />\r\n    <ClInclude Include=\"winmain-inl.h\" />\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClCompile Include=\"AudioStreamSession.cpp\" />\r\n    <ClCompile Include=\"InputStreamSession.cpp\" />\r\n    <ClCompile Include=\"PinBoxServer.cpp\" />\r\n    <ClCompile Include=\"PPClientSession.cpp\" />\r\n    <ClCompile Include=\"PPMessage.cpp\" />\r\n    <ClCompile Include=\"PPServer.cpp\" />\r\n    <ClCompile Include=\"ScreenCaptureSession.cpp\" />\r\n    <ClCompile Include=\"ServerConfig.cpp\" />\r\n    <ClCompile Include=\"stdafx.cpp\">\r\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">Create</PrecompiledHeader>\r\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">Create</PrecompiledHeader>\r\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">Create</PrecompiledHeader>\r\n      <PrecompiledHeader Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">Create</PrecompiledHeader>\r\n    </ClCompile>\r\n  
</ItemGroup>\r\n  <ItemGroup>\r\n    <Text Include=\"input.cfg\">\r\n      <FileType>Text</FileType>\r\n      <ExcludedFromBuild Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">false</ExcludedFromBuild>\r\n      <DeploymentContent Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">true</DeploymentContent>\r\n    </Text>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <None Include=\"hub.cfg\" />\r\n    <None Include=\"server.cfg\" />\r\n  </ItemGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\r\n  <ImportGroup Label=\"ExtensionTargets\">\r\n  </ImportGroup>\r\n</Project>"
  },
  {
    "path": "PinBoxServer/PinBoxServer/PinBoxServer.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\r\n  <ItemGroup>\r\n    <Filter Include=\"Source Files\">\r\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\r\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\r\n    </Filter>\r\n    <Filter Include=\"Header Files\">\r\n      <UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>\r\n      <Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>\r\n    </Filter>\r\n    <Filter Include=\"Resource Files\">\r\n      <UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>\r\n      <Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>\r\n    </Filter>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <Text Include=\"input.cfg\">\r\n      <Filter>Resource Files</Filter>\r\n    </Text>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClInclude Include=\"stdafx.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"targetver.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPClientSession.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPMessage.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPServer.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"winmain-inl.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"ScreenCaptureSession.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"InputStreamSession.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"ServerConfig.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude 
Include=\"AudioStreamSession.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"const.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"HubItem.h\">\r\n      <Filter>Header Files</Filter>\r\n    </ClInclude>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClCompile Include=\"stdafx.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PinBoxServer.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPClientSession.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPMessage.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPServer.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"AudioStreamSession.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"ScreenCaptureSession.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"InputStreamSession.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"ServerConfig.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <None Include=\"server.cfg\">\r\n      <Filter>Resource Files</Filter>\r\n    </None>\r\n    <None Include=\"hub.cfg\">\r\n      <Filter>Resource Files</Filter>\r\n    </None>\r\n  </ItemGroup>\r\n</Project>"
  },
  {
    "path": "PinBoxServer/PinBoxServer/ScreenCaptureSession.cpp",
    "content": "#include \"stdafx.h\"\r\n\r\n#include \"ScreenCaptureSession.h\"\r\n\r\n#include \"PPServer.h\"\r\n#include \"ServerConfig.h\"\r\n#include <locale>\r\n\r\n#define ERROR_PRINT(code) printError(code)\r\nstatic void printError(int errorCode)\r\n{\r\n\tif (errorCode >= 0) return;\r\n\tchar log[AV_ERROR_MAX_STRING_SIZE]{ 0 };\r\n\tstd::cout << \"Error: \" << av_strerror(errorCode, log, AV_ERROR_MAX_STRING_SIZE) << std::endl;\r\n}\r\n\r\n//=========================================================\r\n// FPS counter\r\n//=========================================================\r\nFPSCounter vFPS;\r\nFPSCounter captureFPS;\r\n\r\n//static FILE* testAudioOutFLAC;\r\n\r\n\r\n\r\n// test\r\nint64_t src_ch_layout = AV_CH_LAYOUT_STEREO, dst_ch_layout = AV_CH_LAYOUT_STEREO;\r\nint src_rate = 48000, dst_rate = 22050;\r\nuint8_t **src_data = NULL;\r\nint src_nb_channels = 2, dst_nb_channels = 2;\r\nint src_linesize, dst_linesize;\r\nenum AVSampleFormat src_sample_fmt = AV_SAMPLE_FMT_S16, dst_sample_fmt = AV_SAMPLE_FMT_S16;\r\nstruct SwrContext *swr_ctx;\r\n\r\n\r\n\r\nScreenCaptureSession::ScreenCaptureSession()\r\n{\r\n}\r\n\r\n\r\nScreenCaptureSession::~ScreenCaptureSession()\r\n{\r\n\t// destroy screen capture\r\n\tm_frameGrabber->pause();\r\n\tm_frameGrabber = nullptr;\r\n\r\n\treleaseEncoder();\r\n\r\n\t//fclose(testAudioOutFLAC);\r\n}\r\n\r\n\r\n\r\nvoid ScreenCaptureSession::initScreenCapture()\r\n{\r\n\tif (Initialized) return;\r\n\tif (!m_server) return;\r\n\r\n\tInitialized = true;\r\n\tmFrameRate = ServerConfig::Get()->CaptureFPS;\r\n\t//---------------------------------------------------------------------\r\n\tauto mons = SL::Screen_Capture::GetMonitors();\r\n\tstd::vector<SL::Screen_Capture::Monitor> selectedMonitor = std::vector<SL::Screen_Capture::Monitor>();\r\n\tSL::Screen_Capture::Monitor monitor = 
mons[ServerConfig::Get()->MonitorIndex];\r\n\tselectedMonitor.push_back(monitor);\r\n\r\n\t//---------------------------------------------------------------------\r\n\tm_frameGrabber = nullptr;\r\n\tm_frameGrabber = SL::Screen_Capture::CreateCaptureConfiguration([&]()\r\n\t{\r\n\t\treturn selectedMonitor;\r\n\t})->onNewFrame([&](const SL::Screen_Capture::Image& img, const SL::Screen_Capture::Monitor& monitor)\r\n\t{\r\n\t\tif (!mInitializedCodec) return;\r\n\t\t//if (!m_isStartStreaming || m_clientSession == nullptr) return;\r\n\r\n\t\tauto size = Width(img) * Height(img) * sizeof(SL::Screen_Capture::ImageBGRA);\r\n\t\tauto imgbuffer(std::make_unique<unsigned char[]>(size));\r\n\t\tExtract(img, imgbuffer.get(), size);\r\n\r\n\t\tint totalSize = 0;\r\n\t\tif (mLastFrameData == nullptr) {\r\n\t\t\tmLastFrameData = new FrameData();\r\n\t\t\tmLastFrameData->Width = Width(img);\r\n\t\t\tmLastFrameData->Height = Height(img);\r\n\t\t\tmLastFrameData->StrideWidth = mLastFrameData->Width * 4;\r\n\t\t\ttotalSize = mLastFrameData->StrideWidth * mLastFrameData->Height;\r\n\t\t}else\r\n\t\t{\r\n\t\t\ttotalSize = mLastFrameData->StrideWidth * mLastFrameData->Height;\r\n\t\t}\r\n\r\n\t\tencodeAudioFrame();\r\n\r\n\t\t// encode video\r\n\t\tencodeVideoFrame((u8*)imgbuffer.get());\r\n\r\n\t})->start_capturing();\r\n\tint timeDelay = 1000.0f / (float)mFrameRate;\r\n\tm_frameGrabber->setFrameChangeInterval(std::chrono::milliseconds(timeDelay));\r\n\r\n\t//-----------------------------------------------------\r\n\t// decoder\r\n\tm_audioGrabber = new AudioStreamSession();\r\n\tm_audioGrabber->StartAudioStream();\r\n\r\n\t//-----------------------------------------------------\r\n\t// decoder\r\n\tiSourceWidth = monitor.Width;\r\n\tiSourceHeight = monitor.Height;\r\n\tinitEncoder();\r\n}\r\n\r\nvoid ScreenCaptureSession::encodeVideoFrame(u8* buf)\r\n{\r\n\t// encode\r\n\tconst unsigned char *inData[1] = { buf };\r\n\tint inLinesize[1] = { mLastFrameData->StrideWidth 
};\r\n\tERROR_PRINT(sws_scale(pVideoScaler, inData, inLinesize, 0, mLastFrameData->Height, pVideoFrame->data, pVideoFrame->linesize));\r\n\tpVideoFrame->pts = iVideoFrameIndex;\r\n\tiVideoFrameIndex++;\r\n\tint ret = avcodec_send_frame(pVideoContext, pVideoFrame);\r\n\tERROR_PRINT(ret);\r\n\t//======================================================\r\n\tret = avcodec_receive_packet(pVideoContext, pVideoPacket);\r\n\tERROR_PRINT(ret);\r\n\tif (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) return;\r\n\tif (ret < 0) return;\r\n\t//======================================================\r\n\tif(m_clientSession != nullptr) m_clientSession->PrepareVideoPacketAndSend(pVideoPacket->data, pVideoPacket->size);\r\n\r\n\t//======================================================\r\n\t// FPS sent\r\n\t//======================================================\r\n\tvFPS.onNewFramecounter++;\r\n\tif (std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - vFPS.onNewFramestart).count() >= 1000) {\r\n\t\tvFPS.currentFPS = vFPS.onNewFramecounter;\r\n\t\tvFPS.onNewFramecounter = 0;\r\n\t\tvFPS.onNewFramestart = std::chrono::high_resolution_clock::now();\r\n\t}\r\n\t//======================================================\r\n\tav_packet_unref(pVideoPacket);\r\n}\r\n\r\nvoid ScreenCaptureSession::encodeAudioFrame()\r\n{\r\n\tfloat factor = (float)src_rate / (float)dst_rate;\r\n\r\n\tu32 size = m_audioGrabber->audioBufferSize;\r\n\r\n\tconst u32 frameSize = pAudioFrame->nb_samples * factor;\r\n\tconst u32 readSize = frameSize * 4;\r\n\r\n\twhile (size >= readSize)\r\n\t{\r\n\t\tint ret = av_frame_make_writable(pAudioFrame);\r\n\t\tif (ret < 0)\r\n\t\t{\r\n\t\t\tfprintf(stderr, \"Fail to init Audio frame\\n\");\r\n\t\t\tbreak;\r\n\t\t}\r\n\r\n\t\tm_audioGrabber->ReadFromBuffer(src_data[0], readSize);\r\n\r\n\t\t/* convert to destination format */\r\n\t\tret = swr_convert(swr_ctx, pAudioFrame->data, pAudioFrame->nb_samples, (const uint8_t **)src_data, 
frameSize);\r\n\t\tif (ret < 0) {\r\n\t\t\tfprintf(stderr, \"Error while converting\\n\");\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tpAudioFrame->pts = iAudioPts;\r\n\t\tiAudioPts += pAudioFrame->nb_samples;\r\n\t\t\r\n\t\tret = avcodec_send_frame(pAudioContext, pAudioFrame);\r\n\t\twhile (ret >= 0)\r\n\t\t{\r\n\t\t\tret = avcodec_receive_packet(pAudioContext, pAudioPacket);\r\n\t\t\tif (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\r\n\t\t\t\tbreak;\r\n\t\t\tif (ret < 0) break;\r\n\r\n\t\t\t//TODO: do something with packet data\r\n\t\t\tif (m_clientSession != nullptr) m_clientSession->PrepareAudioPacketAndSend(pAudioPacket->data, pAudioPacket->size, pAudioFrame->pts);\r\n\r\n\t\t\tav_packet_unref(pAudioPacket);\r\n\t\t}\r\n\r\n\t\tsize = m_audioGrabber->audioBufferSize;\r\n\t}\r\n}\r\n\r\nvoid ScreenCaptureSession::initEncoder()\r\n{\r\n\tav_register_all();\r\n\r\n\t//-----------------------------------------------------------------\r\n\t// init video encoder\r\n\t//-----------------------------------------------------------------\r\n\tconst AVCodec* videoCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);\r\n\tpVideoContext = avcodec_alloc_context3(videoCodec);\r\n\t// TODO: update by config here\r\n\tpVideoContext->bit_rate = 1200000;\r\n\tpVideoContext->width = 400;\r\n\tpVideoContext->height = 240;\r\n\tpVideoContext->time_base = AVRational { 1, mFrameRate };\r\n\tpVideoContext->framerate = AVRational { mFrameRate, 1 };\r\n\tpVideoContext->gop_size = 18;\r\n\tpVideoContext->max_b_frames = 2;\r\n\tpVideoContext->block_align = 8;\r\n\tpVideoContext->pix_fmt = AV_PIX_FMT_YUV420P;\r\n\t// Open\r\n\tERROR_PRINT(avcodec_open2(pVideoContext, videoCodec, NULL));\r\n\tpVideoPacket = av_packet_alloc();\r\n\tpVideoFrame = av_frame_alloc();\r\n\tpVideoFrame->format = pVideoContext->pix_fmt;\r\n\tpVideoFrame->width = pVideoContext->width;\r\n\tpVideoFrame->height = pVideoContext->height;\r\n\tERROR_PRINT(av_frame_get_buffer(pVideoFrame, 8)); // align 8 ?\r\n\tpVideoScaler = 
sws_getContext(\r\n\t\t\tiSourceWidth, iSourceHeight, AV_PIX_FMT_BGRA,\r\n\t\t\tpVideoContext->width, pVideoContext->height, AV_PIX_FMT_YUV420P,\r\n\t\t\t\tSWS_BICUBIC, 0, 0, 0);\r\n\r\n\t//-----------------------------------------------------------------\r\n\t// init audio encoder\r\n\t//-----------------------------------------------------------------\r\n\tconst AVCodec* audioCodec = avcodec_find_encoder(AV_CODEC_ID_MP2);\r\n\tpAudioContext = avcodec_alloc_context3(audioCodec);\r\n\t//pAudioContext->time_base = AVRational{ 1 , 30 };\r\n\tpAudioContext->bit_rate = 48000;\r\n\tpAudioContext->sample_fmt = audioCodec->sample_fmts[0];\r\n\tpAudioContext->sample_rate = dst_rate;\r\n\tpAudioContext->channels = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO);;\r\n\tpAudioContext->channel_layout = AV_CH_LAYOUT_STEREO;\r\n\tpAudioContext->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;\r\n\tint ret = avcodec_open2(pAudioContext, audioCodec, NULL);\n\n\tpAudioPacket = av_packet_alloc();\n\tpAudioFrame = av_frame_alloc();\n\tpAudioFrame->nb_samples = pAudioContext->frame_size;\n\tpAudioFrame->format = pAudioContext->sample_fmt;\n\tpAudioFrame->channel_layout = pAudioContext->channel_layout;\n\tERROR_PRINT(av_frame_get_buffer(pAudioFrame, 0));\r\n\r\n\r\n\r\n\t/* create resampler context */\r\n\tswr_ctx = swr_alloc();\r\n\tif (!swr_ctx) {\r\n\t\tfprintf(stderr, \"Could not allocate resampler context\\n\");\r\n\t\tret = AVERROR(ENOMEM);\r\n\t\treturn;\r\n\t}\r\n\r\n\t/* set options */\r\n\tav_opt_set_int(swr_ctx, \"in_channel_layout\", src_ch_layout, 0);\r\n\tav_opt_set_int(swr_ctx, \"in_sample_rate\", src_rate, 0);\r\n\tav_opt_set_sample_fmt(swr_ctx, \"in_sample_fmt\", src_sample_fmt, 0);\r\n\r\n\tav_opt_set_int(swr_ctx, \"out_channel_layout\", dst_ch_layout, 0);\r\n\tav_opt_set_int(swr_ctx, \"out_sample_rate\", dst_rate, 0);\r\n\tav_opt_set_sample_fmt(swr_ctx, \"out_sample_fmt\", dst_sample_fmt, 0);\r\n\r\n\t/* initialize the resampling context */\r\n\tif ((ret = 
swr_init(swr_ctx)) < 0) {\r\n\t\tfprintf(stderr, \"Failed to initialize the resampling context\\n\");\r\n\t\treturn;\r\n\t}\r\n\r\n\tfloat factor = (float)src_rate / (float)dst_rate;\r\n\r\n\t/* allocate source and destination samples buffers */\r\n\tsrc_nb_channels = av_get_channel_layout_nb_channels(src_ch_layout);\r\n\tret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels, factor * pAudioFrame->nb_samples, src_sample_fmt, 0);\r\n\tif (ret < 0) {\r\n\t\tfprintf(stderr, \"Could not allocate source samples\\n\");\r\n\t\treturn;\r\n\t}\r\n\r\n\r\n\t// test\r\n\t//testAudioOutFLAC = fopen(\"test_audio.bin\", \"wb\");\r\n\r\n\tmInitializedCodec = true;\r\n}\r\n\r\nvoid ScreenCaptureSession::releaseEncoder()\r\n{\r\n\t// free video\r\n\tavcodec_free_context(&pVideoContext);\r\n\tav_frame_free(&pVideoFrame);\r\n\tav_packet_free(&pVideoPacket);\r\n\t// free audio\r\n\tavcodec_free_context(&pAudioContext);\r\n\tav_frame_free(&pAudioFrame);\r\n\tav_packet_free(&pAudioPacket);\r\n\r\n\tdelete mLastFrameData;\r\n}\r\n\r\n\r\nvoid ScreenCaptureSession::setParent(PPServer* parent)\r\n{\r\n\tm_server = parent;\r\n}\r\n\r\nvoid ScreenCaptureSession::startStream()\r\n{\r\n\tif (m_isStartStreaming) return;\r\n\tm_isStartStreaming = true;\r\n\tiVideoFrameIndex = 0;\r\n\r\n\tinitScreenCapture();\r\n}\r\n\r\nvoid ScreenCaptureSession::stopStream()\r\n{\r\n\tif (!m_isStartStreaming) return;\r\n\tm_isStartStreaming = false;\r\n\tInitialized = false;\r\n\t// release frame grabber\r\n\tm_frameGrabber->pause();\r\n\tm_frameGrabber = nullptr;\r\n\t// release audio\r\n\tm_audioGrabber->Pause();\r\n\r\n\t// release encoder\r\n\treleaseEncoder();\r\n}\r\n\r\nvoid ScreenCaptureSession::registerClientSession(PPClientSession* session)\r\n{\r\n\tm_clientSession = session;\r\n}\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/ScreenCaptureSession.h",
    "content": "#pragma once\r\n\r\n#ifndef _SCREEN_CAPTURE_SESSION_H__\r\n#define _SCREEN_CAPTURE_SESSION_H__\r\n\r\n// frame capture\r\n#include \"ScreenCapture.h\"\r\n#include \"PPMessage.h\"\r\n#include \"PPClientSession.h\"\r\n\r\n//encode\r\n#include \"webp/encode.h\"\r\n#include <turbojpeg.h>\r\n#include <opencv2/core.hpp>\r\n#include <opencv2/imgcodecs.hpp>\r\n#include <opencv2/highgui.hpp>\r\n#include <opencv2/opencv.hpp>\r\n#include \"AudioStreamSession.h\"\r\n#include <thread>\r\n\r\n\r\n//ffmpeg\r\nextern \"C\" {\r\n#include <libavutil/common.h>\r\n#include <libavutil/imgutils.h>\r\n#include <libavutil/samplefmt.h>\r\n#include <libavutil/timestamp.h>\r\n#include <libavformat/avformat.h>\r\n#include <libavformat/avio.h>\r\n#include <libavutil/file.h>\r\n#include <libavutil/opt.h>\r\n#include <libswscale/swscale.h>\r\n#include <libswresample/swresample.h>\r\n#include \"libavutil/audio_fifo.h\"\r\n#include <libavutil/channel_layout.h>\r\n}\r\n\r\ntypedef struct\r\n{\r\n\tu8* piece;\r\n\tu32 size;\r\n\tu8 index;\r\n\tu32 frameIndex;\r\n} FramePiece;\r\n\r\ntypedef std::function<void()> OnClientReallyClose;\r\n\r\n#define ENCODE_TYPE_WEBP 0x01\r\n#define ENCODE_TYPE_JPEG_TURBO 0x02\r\n#define ENCODE_TYPE_MPEG4 0x03\r\n\r\n#define TRANSFER_BUFFER_SIZE 0x50000\r\n\r\n#define MEMORY_BUFFER_SIZE 0x1400000\r\n#define MEMORY_BUFFER_PADDING 0x04\r\n\r\ntypedef struct MemoryBuffer {\r\n\tu8* pBufferAddr;\r\n\tu32 iCursor;\r\n\tu32 iSize;\r\n\tu32 iMaxSize;\r\n\tstd::mutex*\tpMutex;\r\n\r\n\tvoid write(u8* buf, u32 size)\r\n\t{\r\n\t\tif (iSize < size) return;\r\n\t\tpMutex->lock();\r\n\t\tmemcpy(pBufferAddr + iCursor, buf, size);\r\n\t\tiCursor += size;\r\n\t\tiSize -= size;\r\n\t\tpMutex->unlock();\r\n\t}\r\n\r\n\tint read(u8* buf, u32 size)\r\n\t{\r\n\t\tif (iCursor == 0) return -1;\r\n\t\tpMutex->lock();\r\n\t\tint ret = FFMIN(iCursor, size);\r\n\t\tmemcpy(buf, pBufferAddr, ret);\r\n\t\tiSize += ret;\r\n\t\tiCursor -= ret;\r\n\t\tpMutex->unlock();\r\n\t\treturn 
ret;\r\n\t}\r\n}MemoryBuffer;\r\n\r\ntypedef struct FPSCounter {\r\n\tu32 onNewFramecounter = 0;\r\n\tu32 currentFPS = 0;\r\n\tstd::chrono::time_point<std::chrono::steady_clock> onNewFramestart = std::chrono::high_resolution_clock::now();\r\n\tstd::chrono::time_point<std::chrono::steady_clock> onFrameChanged = std::chrono::high_resolution_clock::now();\r\n};\r\n\r\ntypedef struct FrameData {\r\n\tuint8_t*\t\t\t\tDataAddr;\r\n\tint\t\t\t\t\t\tWidth;\r\n\tint\t\t\t\t\t\tHeight;\r\n\tint\t\t\t\t\t\tStrideWidth;\r\n\tbool\t\t\t\t\tDrew;\r\n}FrameData;\r\n\r\n\r\nclass ScreenCaptureSession\r\n{\r\nprivate:\r\n\tPPServer*\t\t\t\t\t\t\t\t\t\t\t\t\tm_server;\r\n\tPPClientSession*\t\t\t\t\t\t\t\t\t\t\tm_clientSession = nullptr;\r\n\r\n\tstd::shared_ptr<SL::Screen_Capture::IScreenCaptureManager>\tm_frameGrabber;\r\n\tAudioStreamSession*\t\t\t\t\t\t\t\t\t\t\tm_audioGrabber;\r\n\tbool\t\t\t\t\t\t\t\t\t\t\t\t\t\tm_isStartStreaming = false;\r\n\r\n\tu8\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tm_encodeType = ENCODE_TYPE_MPEG4;\r\n\r\n\r\n\r\n\tint\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tmFrameRate;\r\n\r\n\t// Video\r\n\tint\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tiSourceWidth;\r\n\tint\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tiSourceHeight;\r\n\r\n\tAVCodecContext*\t\t\t\t\t\t\t\t\t\t\t\tpVideoContext;\r\n\tAVPacket*\t\t\t\t\t\t\t\t\t\t\t\t\tpVideoPacket;\r\n\tAVFrame*\t\t\t\t\t\t\t\t\t\t\t\t\tpVideoFrame;\r\n\tSwsContext*\t\t\t\t\t\t\t\t\t\t\t\t\tpVideoScaler;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tiVideoFrameIndex = 0;\r\n\tvoid\t\t\t\t\t\t\t\t\t\t\t\t\t\tencodeVideoFrame(u8* buf);\r\n\r\n\t// Audio\r\n\tAVCodecContext*\t\t\t\t\t\t\t\t\t\t\t\tpAudioContext;\r\n\tAVPacket*\t\t\t\t\t\t\t\t\t\t\t\t\tpAudioPacket;\r\n\tAVFrame*\t\t\t\t\t\t\t\t\t\t\t\t\tpAudioFrame;\r\n\tint64_t\t\t\t\t\t\t\t\t\t\t\t\t\t\tiAudioPts = 0;\r\n\tvoid\t\t\t\t\t\t\t\t\t\t\t\t\t\tencodeAudioFrame();\r\n\r\n\r\n\r\n\r\n\r\n\tbool\t\t\t\t\t\t\t\t\t\t\t\t\t\tmInitializedCodec = false;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tmLastSentFrame = 
0;\r\n\tvolatile bool\t\t\t\t\t\t\t\t\t\t\t\tmIsStopEncode = false;\r\n\tvoid\t\t\t\t\t\t\t\t\t\t\t\t\t\tinitEncoder();\r\n\tvoid\t\t\t\t\t\t\t\t\t\t\t\t\t\treleaseEncoder();\r\npublic:\r\n\tScreenCaptureSession();\r\n\t~ScreenCaptureSession();\r\n\r\n\tbool\t\t\t\t\t\t\t\t\t\tInitialized = false;\r\n\r\n\tbool\t\t\t\t\t\t\t\t\t\tmIsFirstFrame = true;\r\n\tFrameData*\t\t\t\t\t\t\t\t\tmLastFrameData = nullptr;\r\n\r\n\tvoid\t\t\t\t\t\t\t\t\t\tsetParent(PPServer* parent);\r\n\r\n\tvoid\t\t\t\t\t\t\t\t\t\tstartStream();\r\n\tvoid\t\t\t\t\t\t\t\t\t\tstopStream();\r\n\tvoid\t\t\t\t\t\t\t\t\t\tregisterClientSession(PPClientSession* session);\r\n\tvoid\t\t\t\t\t\t\t\t\t\tinitScreenCapture();\r\n\r\n\tu32\t\t\t\t\t\t\t\t\t\t\tcurrentFrame() const { return iVideoFrameIndex; }\r\n\tbool\t\t\t\t\t\t\t\t\t\tisStreaming() const { return m_isStartStreaming; }\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/ServerConfig.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"ServerConfig.h\"\r\n#include <ScreenCapture.h>\r\n#include <fstream>\r\n#include <sstream>\r\n\r\n\r\nvoid genRandom(char *s, const int len) {\r\n\tstatic const char alphanum[] =\r\n\t\t\"0123456789\"\r\n\t\t\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\r\n\t\t\"abcdefghijklmnopqrstuvwxyz\";\r\n\r\n\tfor (int i = 0; i < len; ++i) {\r\n\t\ts[i] = alphanum[rand() % (sizeof(alphanum) - 1)];\r\n\t}\r\n\ts[len] = 0;\r\n}\r\n\r\n\r\nServerConfig::ServerConfig()\r\n{\r\n\tLoadConfig();\r\n\tLoadHubItems();\r\n}\r\n\r\n\r\nServerConfig::~ServerConfig()\r\n{\r\n}\r\n\r\nstatic ServerConfig* mInstance = nullptr;\r\nServerConfig* ServerConfig::Get()\r\n{\r\n\tif(mInstance == nullptr)\r\n\t{\r\n\t\tmInstance = new ServerConfig();\r\n\t}\r\n\treturn mInstance;\r\n}\r\n\r\nvoid ServerConfig::LoadConfig()\r\n{\r\n\tlibconfig::Config configFile;\r\n\ttry\r\n\t{\r\n\t\tconfigFile.readFile(\"server.cfg\");\r\n\t}\r\n\tcatch (const libconfig::FileIOException& fioex)\r\n\t{\r\n\t\tstd::cout << \"[Error] Server config file was not found.\" << std::endl;\r\n\t}\r\n\tcatch (const libconfig::ParseException& pex)\r\n\t{\r\n\t\tstd::cout << \"[Error] Server config file corrupted.\" << std::endl;\r\n\t}\r\n\tconst libconfig::Setting& root = configFile.getRoot();\r\n\tstd::cout << \"=========== SERVER CONFIG ===================\" << std::endl;\r\n\tint monitor = root.lookup(\"monitor_index\");\r\n\tMonitorIndex = monitor;\r\n\tstd::cout << \" Monitor Index: \" << MonitorIndex << std::endl;\r\n\r\n\tint captureFPS = root.lookup(\"capture_fps\");\r\n\tCaptureFPS = captureFPS;\r\n\tif (CaptureFPS <= 0) CaptureFPS = 30;\r\n\tstd::cout << \" FPS: \" << CaptureFPS << std::endl;\r\n\r\n\tint networkThreads = root.lookup(\"network_threads\");\r\n\tNetworkThread = networkThreads;\r\n\tif (NetworkThread <= 0) NetworkThread = 2;\r\n\tstd::cout << \" Network Threads: \" << NetworkThread << std::endl;\r\n\r\n\tint serverPort = 
root.lookup(\"server_port\");\r\n\tServerPort = serverPort;\r\n\tif (ServerPort <= 0) ServerPort = 1234;\r\n\tstd::cout << \" Server Port: \" << ServerPort << std::endl;\r\n\r\n\tstd::cout << \"=============================================\" << std::endl << std::flush;\r\n}\r\n\r\nvoid ServerConfig::LoadHubItems()\r\n{\r\n\tlibconfig::Config configFile;\r\n\ttry\r\n\t{\r\n\t\tconfigFile.readFile(\"hub.cfg\");\r\n\t}\r\n\tcatch (const libconfig::FileIOException& fioex)\r\n\t{\r\n\t\tstd::cout << \"[Error] Hub config file was not found.\" << std::endl;\r\n\t}\r\n\tcatch (const libconfig::ParseException& pex)\r\n\t{\r\n\t\tstd::cout << \"[Error] Hub config file corrupted.\" << std::endl;\r\n\t}\r\n\r\n\tauto monitors = SL::Screen_Capture::GetMonitors();\r\n\tfor(auto mon : monitors)\r\n\t{\r\n\t\tHubItem *hubItem = new HubItem();\r\n\t\tstd::ostringstream stringStream;\r\n\t\tstringStream << \"Monitor \" << mon.Index + 1;\r\n\t\thubItem->name = stringStream.str();\r\n\r\n\t\tchar ranUUID[100];\r\n\t\tgenRandom(ranUUID, 16);\r\n\t\thubItem->uuid = ranUUID;\r\n\r\n\t\thubItem->type = HUB_SCREEN;\r\n\t\tHubItems.push_back(hubItem);\r\n\t}\r\n\r\n\r\n\tconst libconfig::Setting &root = configFile.getRoot();\r\n\tconst libconfig::Setting &hub = root[\"hub\"];\r\n\tint count = hub.getLength();\r\n\tfor(int i = 0; i < count; ++i)\r\n\t{\r\n\t\tconst libconfig::Setting &item = hub[i];\r\n\t\tHubItem *hubItem = new HubItem();\r\n\r\n\t\tif(!(item.lookupValue(\"name\", hubItem->name) && \r\n\t\t\titem.lookupValue(\"uuid\", hubItem->uuid) &&\r\n\t\t\titem.lookupValue(\"thumbImage\", hubItem->thumbImage) &&\r\n\t\t\titem.lookupValue(\"exePath\", hubItem->exePath) &&\r\n\t\t\titem.lookupValue(\"processName\", hubItem->processName)))\r\n\t\t\tcontinue;\r\n\r\n\t\t// load image to buf\r\n\t\tstd::ifstream thumbFile(\"tmp\\\\\" + hubItem->thumbImage, std::ifstream::binary);\r\n\t\tif(!thumbFile)\r\n\t\t{\r\n\t\t\tstd::cout << \"[Error] thumbnail image was not found.\" << 
std::endl;\r\n\t\t}else\r\n\t\t{\r\n\t\t\t// get file size\r\n\t\t\tthumbFile.seekg(0, thumbFile.end);\r\n\t\t\tuint32_t length = thumbFile.tellg();\r\n\t\t\tthumbFile.seekg(0, thumbFile.beg);\r\n\r\n\t\t\thubItem->thumbSize = length;\r\n\t\t\thubItem->thumbBuf = (u8*)malloc(length);\r\n\t\t\tthumbFile.read((char*)hubItem->thumbBuf, length);\r\n\t\t\tthumbFile.close();\r\n\t\t}\r\n\r\n\t\thubItem->type = HUB_APP;\r\n\t\tHubItems.push_back(hubItem);\r\n\t}\r\n\r\n}\r\n\r\n\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/ServerConfig.h",
    "content": "#pragma once\r\n#ifndef _PP_SERVER_CONFIG_H__\r\n#define _PP_SERVER_CONFIG_H__\r\n#include <libconfig.h++>\r\n#include <iostream>\r\n#include \"HubItem.h\"\r\n#include <vector>\r\n\r\nclass ServerConfig\r\n{\r\npublic:\r\n\tServerConfig();\r\n\t~ServerConfig();\r\n\r\n\tstatic ServerConfig*\t\tGet();\r\n\r\n\tint\t\t\t\t\t\t\tMonitorIndex = 0;\r\n\tint\t\t\t\t\t\t\tCaptureFPS = 30;\r\n\tint\t\t\t\t\t\t\tNetworkThread = 2;\r\n\tint\t\t\t\t\t\t\tServerPort = 1234;\r\n\tvoid\t\t\t\t\t\tLoadConfig();\r\n\r\n\tstd::vector<HubItem*>\t\tHubItems;\r\n\tvoid\t\t\t\t\t\tLoadHubItems();\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxServer/PinBoxServer/UIMainWindow.cpp",
    "content": "#include \"stdafx.h\"\r\n#include \"UIMainWindow.h\"\r\n\r\n\r\n\r\nUIMainWindow::UIMainWindow(QWidget* parent) : QMainWindow(parent), ui(new UIMainWindow)\r\n{\r\n\r\n}\r\n\r\nUIMainWindow::~UIMainWindow()\r\n{\r\n\tdelete ui;\r\n}\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/UIMainWindow.h",
    "content": "#pragma once\r\n#include <QtWidgets/QMainWindow>\r\n\r\nclass UIMainWindow : public QMainWindow\r\n{\r\n\tQ_OBJECT\r\npublic:\r\n\texplicit UIMainWindow(QWidget *parent = 0);\r\n\t~UIMainWindow();\r\nprivate:\r\n\tUIMainWindow *ui;\r\n};\r\n\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/const.h",
    "content": "#pragma once\n#define RESET   \"\\033[0m\"\n#define BLACK   \"\\033[30m\"      /* Black */\n#define RED     \"\\033[31m\"      /* Red */\n#define GREEN   \"\\033[32m\"      /* Green */\n#define YELLOW  \"\\033[33m\"      /* Yellow */\n#define BLUE    \"\\033[34m\"      /* Blue */\n#define MAGENTA \"\\033[35m\"      /* Magenta */\n#define CYAN    \"\\033[36m\"      /* Cyan */\n#define WHITE   \"\\033[37m\"      /* White */\n#define BOLDBLACK   \"\\033[1m\\033[30m\"      /* Bold Black */\n#define BOLDRED     \"\\033[1m\\033[31m\"      /* Bold Red */\n#define BOLDGREEN   \"\\033[1m\\033[32m\"      /* Bold Green */\n#define BOLDYELLOW  \"\\033[1m\\033[33m\"      /* Bold Yellow */\n#define BOLDBLUE    \"\\033[1m\\033[34m\"      /* Bold Blue */\n#define BOLDMAGENTA \"\\033[1m\\033[35m\"      /* Bold Magenta */\n#define BOLDCYAN    \"\\033[1m\\033[36m\"      /* Bold Cyan */\n#define BOLDWHITE   \"\\033[1m\\033[37m\"      /* Bold White */"
  },
  {
    "path": "PinBoxServer/PinBoxServer/hub.cfg",
    "content": "//--------------------------------------------\n// PinBox hub\n//--------------------------------------------\nhub = \n(\n\t{\n\t\tname = \"Cemu\";\n\t\tuuid = \"dd8501b5\";\n\t\tthumbImage = \"cemu.png\";\n\t\texePath = \"F:\\Games\\Cemu\\cemu_1.11.1\\Cemu.exe\";\n\t\tprocessName = \"Cemu 1.12.2d\";\n\t},\n\n\t{\n\t\tname = \"Cup Head\";\n\t\tuuid = \"cadfdc5e\";\n\t\tthumbImage = \"cuphead.png\";\n\t\texePath = \"F:\\Games\\Cuphead\\Cuphead.exe\";\n\t\tprocessName = \"Cuphead\";\n\t},\n\n\t{\n\t\tname = \"Pit People\";\n\t\tuuid = \"58b09523\";\n\t\tthumbImage = \"pitpeople.png\";\n\t\texePath = \"F:\\Games\\PitPeople\\pitpeople.exe\";\n\t\tprocessName = \"Pit People\";\n\t}\n);\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/input.cfg",
    "content": "//--------------------------------------------\r\n// PinBox input config\r\n//--------------------------------------------\r\n\r\n//--------------------------------------------\r\n// mouse speed when use circle pad\r\n// form: mouse speed * pos percent ( 0 -> 100%) * direction\r\n//--------------------------------------------\r\nmouse_speed = 15\r\n\r\n//--------------------------------------------\r\n// deadzone of circle pad ( min: 0 max: 156 default: 15)\r\n// this value should be around 5 ~ 20 -> 15 is best\r\n//--------------------------------------------\r\ncircle_pad_deadzone = 15\r\n\r\n\r\n//--------------------------------------------\r\n// input profiles\r\n//--------------------------------------------\r\ninput_profiles = \r\n(\r\n\t{\r\n\t\t//------------------------------------\r\n\t\t// [ REQUIRE PART ]\r\n\t\t//------------------------------------\r\n\r\n\t\t// max 255 char\r\n\t\tname = \"Mouse Support\";\r\n\r\n\t\ttype = \"keyboard\";\r\n\r\n\t\tbtn_A = \"L\";\r\n\t\tbtn_B = \"K\";\r\n\t\tbtn_X = \"I\";\r\n\t\tbtn_Y = \"J\";\r\n\r\n\t\tbtn_DPAD_UP = \"W\";\r\n\t\tbtn_DPAD_DOWN = \"S\";\r\n\t\tbtn_DPAD_LEFT = \"A\";\r\n\t\tbtn_DPAD_RIGHT = \"D\";\r\n\r\n\t\tbtn_START = \"Z\";\r\n\t\tbtn_SELECT = \"X\";\r\n\r\n\t\tbtn_L = \"U\";\r\n\t\tbtn_R = \"O\";\r\n\r\n\t\t//------------------------------------\r\n\t\t// [ OPTIONAL PART ]\r\n\t\t//------------------------------------\r\n\r\n\t\t// use circle pad as mouse\r\n\t\tcircle_pad_as_mouse = true;\r\n\r\n\t\t//new 3ds only\r\n\t\tzl_zr_as_mouse_button = true;\r\n\r\n\t\t// if not use as mouse then set it as button\r\n\t\t//btn_ZL = \"Y\";\r\n\t\t//btn_ZR = \"P\";\r\n\t},\r\n\r\n\t{\r\n\t\t//------------------------------------\r\n\t\t// [ REQUIRE PART ]\r\n\t\t//------------------------------------\r\n\r\n\t\t// max 255 char\r\n\t\tname = \"Xbox 360\";\r\n\r\n\t\t// simulate xbox 360 controller\r\n\t\ttype = \"x360\";\r\n\r\n\t}\r\n\r\n);"
  },
  {
    "path": "PinBoxServer/PinBoxServer/server.cfg",
    "content": "//--------------------------------------------\r\n// PinBox server config\r\n//--------------------------------------------\r\n\r\n//--------------------------------------------\r\n// index of monitor to capture\r\n// start from 0 to number of your monitor\r\n// eg: i have 3 monitor then index should be in range 0 .. 2\r\n//--------------------------------------------\r\nmonitor_index = 1;\r\n\r\n//--------------------------------------------\r\n// FPS : should be set to 30 for now\r\n//--------------------------------------------\r\ncapture_fps = 30;\r\n\r\n//--------------------------------------------\r\n// Network threads\r\n//--------------------------------------------\r\nnetwork_threads = 2;\r\n\r\n//--------------------------------------------\r\n// Server port\r\n//--------------------------------------------\r\nserver_port = 1234;\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/stdafx.cpp",
    "content": "// stdafx.cpp : source file that includes just the standard includes\r\n// PinBoxServer.pch will be the pre-compiled header\r\n// stdafx.obj will contain the pre-compiled type information\r\n\r\n#include \"stdafx.h\"\r\n\r\n// TODO: reference any additional headers you need in STDAFX.H\r\n// and not in this file\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/stdafx.h",
    "content": "// stdafx.h : include file for standard system include files,\r\n// or project specific include files that are used frequently, but\r\n// are changed infrequently\r\n//\r\n\r\n#pragma once\r\n\r\n#include \"targetver.h\"\r\n\r\n#include <stdio.h>\r\n#include <tchar.h>\r\n\r\n\r\n\r\n// TODO: reference additional headers your program requires here\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/targetver.h",
    "content": "#pragma once\r\n\r\n// Including SDKDDKVer.h defines the highest available Windows platform.\r\n\r\n// If you wish to build your application for a previous Windows platform, include WinSDKVer.h and\r\n// set the _WIN32_WINNT macro to the platform you wish to support before including SDKDDKVer.h.\r\n\r\n#include <SDKDDKVer.h>\r\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer/winmain-inl.h",
    "content": "#pragma once\n\nnamespace {\n\tstruct OnApp {\n\t\tOnApp() {\n#ifdef WIN32\n\t\t\t// Initialize Winsock 2.2\n\t\t\tWSADATA wsaData;\n\t\t\tint err = WSAStartup(MAKEWORD(2, 2), &wsaData);\n\n\t\t\tif (err) {\n\t\t\t\tstd::cout << \"WSAStartup() failed with error: %d\" << err;\n\t\t\t}\n#endif\n\t\t}\n\t\t~OnApp() {\n#ifdef WIN32\n\t\t\tsystem(\"pause\");\n\t\t\tWSACleanup();\n#endif\n\t\t}\n\t} __s_onexit_pause;\n}\n"
  },
  {
    "path": "PinBoxServer/PinBoxServer.sln",
    "content": "﻿\r\nMicrosoft Visual Studio Solution File, Format Version 12.00\r\n# Visual Studio 14\r\nVisualStudioVersion = 14.0.25420.1\r\nMinimumVisualStudioVersion = 10.0.40219.1\r\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"PinBoxServer\", \"PinBoxServer\\PinBoxServer.vcxproj\", \"{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}\"\r\nEndProject\r\nGlobal\r\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\r\n\t\tDebug|x64 = Debug|x64\r\n\t\tDebug|x86 = Debug|x86\r\n\t\tRelease|x64 = Release|x64\r\n\t\tRelease|x86 = Release|x86\r\n\tEndGlobalSection\r\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Debug|x64.ActiveCfg = Debug|x64\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Debug|x64.Build.0 = Debug|x64\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Debug|x86.ActiveCfg = Debug|Win32\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Debug|x86.Build.0 = Debug|Win32\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Release|x64.ActiveCfg = Release|x64\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Release|x64.Build.0 = Release|x64\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Release|x86.ActiveCfg = Release|Win32\r\n\t\t{944CB352-A2A2-4B65-AB3B-18566D0ABFA2}.Release|x86.Build.0 = Release|Win32\r\n\tEndGlobalSection\r\n\tGlobalSection(SolutionProperties) = preSolution\r\n\t\tHideSolutionNode = FALSE\r\n\tEndGlobalSection\r\nEndGlobal\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPDecoder.cpp",
    "content": "#include \"PPDecoder.h\"\n#include <iostream>\n#include <opencv2/core/mat.hpp>\n#include <opencv2/shape/hist_cost.hpp>\n#include <opencv2/highgui.hpp>\n\n// 0x1000 = 4096\n// 0x800 = 2048\n#define READ_DELAY 0x1000\n#define ERROR_PRINT(code) printError(code)\nvoid PPDecoder::printError(int errorCode)\n{\n\tif (errorCode >= 0) return;\n\tchar log[AV_ERROR_MAX_STRING_SIZE]{ 0 };\n\tstd::cout << \"Error: \" << av_make_error_string(log, AV_ERROR_MAX_STRING_SIZE, errorCode) << std::endl;\n}\n\nPPDecoder::PPDecoder()\n{\n}\n\nPPDecoder::~PPDecoder()\n{\n}\n\n#define READ_BUFFER_SIZE 0x800\nstatic u8* readBuffer = nullptr;\nstatic void videoDecodeThreadFunc(void* arg)\n{\n\tPPDecoder* self = reinterpret_cast<PPDecoder*>(arg);\n\t//-------------------------------------------------------\n\tint timeDelay = 1000.0f / (float)self->mFrameRate;\n\tauto timer = new Timer<int, std::milli>(std::chrono::duration<int, std::milli>(1));\n\t//-------------------------------------------------------\n\treadBuffer = (u8*)malloc(READ_BUFFER_SIZE + MEMORY_BUFFER_PADDING);\n\tmemset(readBuffer + READ_BUFFER_SIZE, 0, MEMORY_BUFFER_PADDING);\n\tself->pVideoPacket->size = -1;\n\tint ret = 0;\n\t//-------------------------------------------------------\n\twhile (true)\n\t{\n\t\ttimer->start();\n\t\t\n\t\tself->pVideoPacket->size = self->pVideoIOBuffer->read(self->pVideoPacket->data, READ_BUFFER_SIZE);\n\t\twhile(self->pVideoPacket->size > 0)\n\t\t{\n\t\t\tself->decodeVideoStream();\n\t\t}\n\n\t\ttimer->wait();\n\t}\n}\n\nvoid PPDecoder::initDecoder()\n{\n\tav_register_all();\n\tinitVideoStream();\n}\n\nvoid PPDecoder::initVideoStream()\n{\n\t//-----------------------------------------------------------------\n\t// init video encoder\n\t//-----------------------------------------------------------------\n\tconst AVCodec* videoCodec = avcodec_find_decoder(AV_CODEC_ID_MPEG4);\n\tpVideoParser = av_parser_init(videoCodec->id);\n\tpVideoContext = 
avcodec_alloc_context3(videoCodec);\n\t// Open\n\tint ret = avcodec_open2(pVideoContext, videoCodec, NULL);\n\tpVideoPacket = av_packet_alloc();\n\tpVideoFrame = av_frame_alloc();\n\t// Init custom memory buffer\n\tpVideoIOBuffer = new MemoryBuffer();\n\tpVideoIOBuffer->pBufferAddr = (u8*)malloc(MEMORY_BUFFER_SIZE + MEMORY_BUFFER_PADDING);\n\tmemset(pVideoIOBuffer->pBufferAddr + MEMORY_BUFFER_SIZE, 0, MEMORY_BUFFER_PADDING);\n\tpVideoIOBuffer->iSize = MEMORY_BUFFER_SIZE;\n\tpVideoIOBuffer->iMaxSize = MEMORY_BUFFER_SIZE;\n\tpVideoIOBuffer->iCursor = 0;\n\tpVideoIOBuffer->pMutex = new std::mutex();\n}\n\nvoid PPDecoder::initAudioStream()\n{\n}\n\nstatic u8* RGB = nullptr;\nvoid PPDecoder::decodeVideoStream()\n{\n\tint ret = 0;\n\tret = avcodec_send_packet(pVideoContext, pVideoPacket);\n\tif (ret < 0) return;\n\t//while (ret >= 0) {\n\t\tret = avcodec_receive_frame(pVideoContext, pVideoFrame);\n\t\tif (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) return;\n\t\telse if (ret < 0) exit(0);\n\n\t\t//printf(\"saving frame %3d\\n\", pVideoContext->frame_number);\n\t\t//fflush(stdout);\n\t\t//----------------------------------------------\n\t\t// process frame\n\t\t//----------------------------------------------\n\t\tint w = pVideoFrame->width;\n\t\tint h = pVideoFrame->height;\n\n\t\tu8* Y = (u8*)pVideoFrame->data[0];\n\t\tu8* U = (u8*)pVideoFrame->data[1];\n\t\tu8* V = (u8*)pVideoFrame->data[2];\n\n\t\tif (RGB == nullptr) RGB = (u8*)malloc(3 * w * h);\n\n\t\tyuv420_rgb24_std(w, h, Y, U, V, pVideoFrame->linesize[0], pVideoFrame->linesize[1], RGB, w * 3, YCBCR_JPEG);\n\n\t\tcv::Mat image(h, w, CV_8UC3, RGB);\n\t\tcv::cvtColor(image, image, CV_RGB2BGR);\n\t\tcv::imshow(\"video\", image);\n\t\tcv::waitKey(1);\n\t//}\n}\n\nvoid PPDecoder::decodeAudioStream()\n{\n\tint ret = 0;\n}\n\nvoid PPDecoder::startDecodeThread()\n{\n\tinitDecoder();\n\t//-------------------------------------------------------\n\t//std::thread decodeThread = std::thread(videoDecodeThreadFunc, 
this);\n\t//decodeThread.detach();\n}\n\n\nvoid PPDecoder::appendBuffer(u8* buffer, u32 size)\n{\n\t//pVideoIOBuffer->write(buffer, size);\n\t// size is unsigned, so only the empty case needs handling\n\tif (size == 0) return;\n\tpVideoPacket->data = buffer;\n\tpVideoPacket->size = size;\n\tdecodeVideoStream();\n}\n\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPDecoder.h",
    "content": "#pragma once\n#include \"PPMessage.h\"\n#include <mutex>\n\n//#include <3ds.h>\n\n//ffmpeg\nextern \"C\" {\n#include <libavutil/imgutils.h>\n#include <libavutil/samplefmt.h>\n#include <libavutil/timestamp.h>\n#include <libavformat/avformat.h>\n#include <libavformat/avio.h>\n#include <libavutil/file.h>\n#include <libavutil/opt.h>\n#include <libswscale/swscale.h>\n#include <libswresample/swresample.h>\n#include \"yuv_rgb.h\"\n}\n\n#define MEMORY_BUFFER_SIZE 0x1400000\n#define MEMORY_BUFFER_PADDING 0x04\n\ntypedef struct MemoryBuffer {\n\tu8* pBufferAddr;\n\tu32 iCursor;\n\tu32 iSize;\n\tu32 iMaxSize;\n\tstd::mutex*\tpMutex;\n\n\tvoid write(u8* buf, u32 size)\n\t{\n\t\tif (iSize < size) return;\n\t\tpMutex->lock();\n\t\tmemcpy(pBufferAddr + iCursor, buf, size);\n\t\tiCursor += size;\n\t\tiSize -= size;\n\t\tpMutex->unlock();\n\t}\n\n\tint read(u8* buf, u32 size)\n\t{\n\t\tif (iCursor == 0) return -1;\n\t\tpMutex->lock();\n\t\tint ret = FFMIN(iCursor, size);\n\t\tmemcpy(buf, pBufferAddr, ret);\n\t\tiSize += ret;\n\t\tiCursor -= ret;\n\t\tpMutex->unlock();\n\t\treturn ret;\n\t}\n}MemoryBuffer;\n\n\n#define AVIO_MIN(a,b) ((a) > (b) ? 
(b) : (a))\n\nclass ITimer {\npublic:\n\tITimer() {};\n\tvirtual ~ITimer() {}\n\tvirtual void start() = 0;\n\tvirtual void wait() = 0;\n};\ntemplate <class Rep, class Period> class Timer : public ITimer {\n\tstd::chrono::duration<Rep, Period> Rel_Time;\n\tstd::chrono::time_point<std::chrono::high_resolution_clock> StartTime;\n\tstd::chrono::time_point<std::chrono::high_resolution_clock> StopTime;\n\npublic:\n\tTimer(const std::chrono::duration<Rep, Period> &rel_time) : Rel_Time(rel_time) {};\n\tvirtual ~Timer() {}\n\tvirtual void start() { StartTime = std::chrono::high_resolution_clock::now(); }\n\tvirtual void wait()\n\t{\n\t\tauto duration = std::chrono::duration_cast<std::chrono::duration<Rep, Period>>(std::chrono::high_resolution_clock::now() - StartTime);\n\t\tauto timetowait = Rel_Time - duration;\n\t\tif (timetowait.count() > 0) {\n\t\t\tstd::this_thread::sleep_for(timetowait);\n\t\t}\n\t}\n};\n\nclass PPDecoder\n{\npublic:\n\t// io context\n\tMemoryBuffer*\t\t\t\t\t\t\t\t\t\t\t\tpVideoIOBuffer;\n\n\t// video stream\n\tAVCodecParserContext*\t\tpVideoParser;\n\tAVCodecContext*\t\t\t\tpVideoContext;\n\tAVPacket*\t\t\t\t\tpVideoPacket;\n\tAVFrame*\t\t\t\t\tpVideoFrame;\n\tu32\t\t\t\t\t\t\tiVideoFrameIndex = 0;\n\n\n\tint\t\t\t\t\t\t\tmFrameRate = 30;\n\tstatic void\t\t\t\t\tprintError(int errorCode);\npublic:\n\tPPDecoder();\n\t~PPDecoder();\n\n\tvoid initDecoder();\n\n\tvoid appendBuffer(u8* buffer, u32 size);\n\n\tvoid initVideoStream();\n\tvoid initAudioStream();\n\n\tvoid decodeVideoStream();\n\tvoid decodeAudioStream();\n\n\tvoid startDecodeThread();\n};"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPMessage.cpp",
    "content": "#include \"PPMessage.h\"\r\n\r\n\r\nPPMessage::~PPMessage()\r\n{\r\n\tif (g_content != nullptr)\r\n\t\tfree(g_content);\r\n}\r\n\r\nu8* PPMessage::BuildMessage(u8* contentBuffer, u32 contentSize)\r\n{\r\n\r\n\tg_contentSize = contentSize;\r\n\t//-----------------------------------------------\r\n\t// alloc msg buffer block\r\n\tu8* msgBuffer = (u8*)malloc(sizeof(u8) * (contentSize + 9));\r\n\t//-----------------------------------------------\r\n\t// build header\r\n\tu8* pointer = msgBuffer;\r\n\t// 1, validate code\r\n\tWRITE_CHAR_PTR(pointer, g_validateCode, 4);\r\n\t// 2, message code\r\n\tWRITE_U8(pointer, g_code);\r\n\t// 3, content size\r\n\tWRITE_U32(pointer, g_contentSize);\r\n\r\n\t//-----------------------------------------------\r\n\t// build content data\r\n\tif (g_contentSize > 0) {\r\n\t\tmemcpy(msgBuffer + 9, contentBuffer, contentSize);\r\n\t}\r\n\t//-----------------------------------------------\r\n\treturn msgBuffer;\r\n}\r\n\r\nu8* PPMessage::BuildMessageEmpty()\r\n{\r\n\treturn BuildMessage(nullptr, 0);\r\n}\r\n\r\nvoid PPMessage::BuildMessageHeader(u8 code)\r\n{\r\n\tg_code = code;\r\n}\r\n\r\nbool PPMessage::ParseHeader(u8* buffer)\r\n{\r\n\tchar* validateCode = (char*)malloc(4);\r\n\tsize_t readIndex = 0;\r\n\tmemcpy(validateCode, buffer + readIndex, 4); readIndex += 4;\r\n\tif (!std::strcmp(validateCode, \"PPBX\"))\r\n\t{\r\n\t\tprintf(\"Parse header failed. Validate code is incorrect : %s\", validateCode);\r\n\t\treturn false;\r\n\t}\r\n\tfree(validateCode);\r\n\tvalidateCode = nullptr;\r\n\t//-----------------------------------------------------------\r\n\tg_code = READ_U8(buffer, readIndex); readIndex += 1;\r\n\tg_contentSize = READ_U32(buffer, readIndex); readIndex += 4;\r\n\treturn true;\r\n}\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPMessage.h",
    "content": "#pragma once\r\n#ifndef _PP_MESSAGE_H_\r\n#define _PP_MESSAGE_H_\r\n\r\n#include <cstdint>\r\n#include <cstring>\r\n#include <cstdlib>\r\n#include <cstdio>\r\n\r\ntypedef unsigned char      u8;\r\ntypedef unsigned short     u16;\r\ntypedef unsigned int       u32;\r\n\r\n#define WRITE_CHAR_PTR(BUFFER, DATA, SIZE) memcpy(BUFFER, DATA, SIZE); BUFFER += SIZE;\r\n#define WRITE_U8(BUFFER, DATA) *(BUFFER++) = DATA;\r\n#define WRITE_U16(BUFFER, DATA) *(BUFFER++) = DATA; *(BUFFER++) = DATA >> 8;\r\n#define WRITE_U32(BUFFER, DATA) *(BUFFER++) = DATA; *(BUFFER++) = DATA >> 8; *(BUFFER++) = DATA >> 16; *(BUFFER++) = DATA >> 24;\r\n#define READ_U8(BUFFER, INDEX) BUFFER[INDEX];\r\n#define READ_U16(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8;\r\n#define READ_U32(BUFFER, INDEX) BUFFER[INDEX] | BUFFER[INDEX + 1] << 8 | BUFFER[INDEX + 2] << 16 | BUFFER[INDEX + 3] << 24;\r\n#define IS_INVALID_CODE(BUFFER, INDEX) BUFFER[INDEX] != 'P' || BUFFER[INDEX+1] != 'P' || BUFFER[INDEX+2] != 'B' || BUFFER[INDEX+3] != 'X'\r\n\r\nclass PPMessage\r\n{\r\nprivate:\r\n\t//-----------------------------------------------\r\n\t// message header - 9 bytes\r\n\t// [4b] validate code : \"PPBX\"\r\n\t// [1b] message code : 0 to 255 - define message type\r\n\t// [4b] message content size \r\n\t//-----------------------------------------------\r\n\tconst char\t\t\t\t\t\t\tg_validateCode[4] = { 'P','P','B','X' };\r\n\tu8\t\t\t\t\t\t\t\t\tg_code = 0;\r\n\tu32\t\t\t\t\t\t\t\t\tg_contentSize = 0;\r\n\r\n\t//-----------------------------------------------\r\n\t// message content\r\n\t//-----------------------------------------------\r\n\tu8*\t\t\t\t\t\t\t\t\tg_content = nullptr;\r\npublic:\r\n\t~PPMessage();\r\n\tu32\t\t\t\t\t\t\t\t\tGetMessageSize() const { return g_contentSize + 9; }\r\n\tu8*\t\t\t\t\t\t\t\t\tGetMessageContent() { return g_content; }\r\n\tu32\t\t\t\t\t\t\t\t\tGetContentSize() { return g_contentSize; }\r\n\tu8\t\t\t\t\t\t\t\t\tGetMessageCode() { return g_code; 
}\r\n\t//-----------------------------------------------\r\n\t// NOTE: the returned buffer is malloc'd; the caller must free it once it has been sent.\r\n\t//-----------------------------------------------\r\n\tu8*\t\t\t\t\t\t\t\t\tBuildMessage(u8* contentBuffer, u32 contentSize);\r\n\tu8*\t\t\t\t\t\t\t\t\tBuildMessageEmpty();\r\n\tvoid\t\t\t\t\t\t\t\tBuildMessageHeader(u8 code);\r\n\r\n\tbool\t\t\t\t\t\t\t\tParseHeader(u8* buffer);\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPNetwork.cpp",
    "content": "#include \"PPNetwork.h\"\r\n#include <iostream>\r\n#include \"PPSession.h\"\r\n\r\n/*\r\n * @brief: thread handler function\r\n */\r\nvoid PPNetwork::ppNetwork_threadRun(void * arg)\r\n{\r\n\tPPNetwork* network = (PPNetwork*)arg;\r\n\tif (network == nullptr) return;\r\n\t//--------------------------------------------------\r\n\tuint64_t sleepDuration = 1000000ULL * 1;\r\n\twhile(!network->g_threadExit){\r\n\t\tswitch (network->g_connect_state)\r\n\t\t{\r\n\t\t\tcase IDLE:\r\n\t\t\t{\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t// try to connect if not connected to server yet\r\n\t\t\t\tnetwork->ppNetwork_connectToServer();\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t\tcase CONNECTING:\r\n\t\t\t{\r\n\t\t\t\t// connecting state\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t\tcase CONNECTED:\r\n\t\t\t{\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t//send queue message\r\n\t\t\t\tnetwork->ppNetwork_sendMessage();\r\n\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t// listen to server when it connected successfully\r\n\t\t\t\tnetwork->ppNetwork_listenToServer();\r\n\t\t\t\t\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t\tcase FAIL:\r\n\t\t\t{\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t// Exit thread when connection fail\r\n\t\t\t\tnetwork->g_threadExit = true;\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t\tdefault:\r\n\t\t\t{\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t}\r\n#ifndef _WIN32\r\n\t\tsvcSleepThread(sleepDuration);\r\n#else\r\n\t\tstd::this_thread::sleep_for(std::chrono::nanoseconds(sleepDuration));\r\n#endif\r\n\t}\r\n\t//--------------------------------------------------\r\n\t// close connection if thread is exit\r\n\tnetwork->ppNetwork_closeConnection();\r\n}\r\n\r\nvoid PPNetwork::ppNetwork_sendMessage()\r\n{\r\n\t//TODO: is it thread safe ? 
should we add mutex on this ?\r\n\tif (this->g_sendingMessage.size() > 0)\r\n\t{\r\n\t\tQueueMessage* queueMsg = (QueueMessage*)this->g_sendingMessage.front();\r\n\t\tthis->g_sendingMessage.pop();\r\n\t\t//--------------------------------------------------\r\n\t\tif (queueMsg->msgSize > 0 && queueMsg->msgBuffer != nullptr)\r\n\t\t{\r\n\t\t\tuint32_t totalSent = 0;\r\n\t\t\t//--------------------------------------------------------\r\n\t\t\t// send message, resuming from the last offset on partial sends\r\n\t\t\tdo\r\n\t\t\t{\r\n\t\t\t\tint sendAmount = send(this->g_sock, (char*)queueMsg->msgBuffer + totalSent, queueMsg->msgSize - totalSent, 0);\r\n\t\t\t\tif (sendAmount < 0)\r\n\t\t\t\t{\r\n\t\t\t\t\t// ERROR when send message: drop it so it does not leak\r\n\t\t\t\t\tfree(queueMsg->msgBuffer);\r\n\t\t\t\t\tdelete queueMsg;\r\n\t\t\t\t\treturn;\r\n\t\t\t\t}\r\n\t\t\t\ttotalSent += sendAmount;\r\n\t\t\t} while (totalSent < queueMsg->msgSize);\r\n\t\t\t//std::cout << \"[Queue]Session: #\" << g_session->sessionID << \" send message done! \" << std::endl;\r\n\t\t\t//--------------------------------------------------------\r\n\t\t\t// free message\r\n\t\t\tfree(queueMsg->msgBuffer);\r\n\t\t\tdelete queueMsg;\r\n\t\t}\r\n\t}\r\n}\r\n\r\nvoid PPNetwork::ppNetwork_connectToServer()\r\n{\r\n\tg_connect_state = CONNECTING;\r\n\t//--------------------------------------------------\r\n\t// Trying to get address information\r\n\tstruct addrinfo hints, *servinfo, *p;\r\n\tmemset(&hints, 0, sizeof hints);\r\n\thints.ai_family = AF_INET;\r\n\thints.ai_socktype = SOCK_STREAM;\r\n\thints.ai_protocol = IPPROTO_TCP;\r\n\tint rv = getaddrinfo(g_ip, g_port, &hints, &servinfo);\r\n\tif(rv != 0)\r\n\t{\r\n\t\tprintf(\"Can't get address information.\\n\");\r\n\t\t// Error: fail to get address\r\n\t\tg_connect_state = FAIL;\r\n\t\tWSACleanup();\r\n\t\treturn;\r\n\t}\r\n\t//--------------------------------------------------\r\n\t// define socket\r\n\tg_sock = socket(hints.ai_family, hints.ai_socktype, hints.ai_protocol);\r\n\tif (g_sock == -1)\r\n\t{\r\n\t\tprintf(\"Can't create new socket.\\n\");\r\n\t\t// Error: can't create socket\r\n\t\tg_connect_state = 
FAIL;\r\n\t\tfreeaddrinfo(servinfo);\r\n\t\tWSACleanup();\r\n\t\treturn;\r\n\t}\r\n\r\n\t//--------------------------------------------------\r\n\t// loop thought all result and connect\r\n\tfor (p = servinfo; p != NULL; p = p->ai_next)\r\n\t{\r\n\t\t//--------------------------------------------------\r\n\t\t// try to connect to server\r\n\t\tauto ret = connect(g_sock, p->ai_addr, p->ai_addrlen);\r\n\t\tif (ret == -1)\r\n\t\t{\r\n\t\t\t// Error: can't connect to this ai -> try next\r\n\t\t\tg_connect_state = FAIL;\r\n\t\t\tcontinue;\r\n\t\t}\r\n\r\n\t\t//--------------------------------------------------\r\n\t\t// if it run here so it already connecteds\r\n\t\tg_connect_state = CONNECTED;\r\n\r\n\t\tbreak;\r\n\t}\r\n\r\n\t//--------------------------------------------------\r\n\t// Note: check if connected\r\n\tif (g_connect_state == CONNECTED)\r\n\t{\r\n\t\t//--------------------------------------------------\r\n\t\t// set socket to non blocking so we can easy control it\r\n\t\t//fcntl(sockManager->sock, F_SETFL, O_NONBLOCK);\r\n#ifndef _WIN32\r\n\t\tfcntl(g_sock, F_SETFL, fcntl(g_sock, F_GETFL, 0) | O_NONBLOCK);\r\n#else\r\n\t\tunsigned long on = 1;\r\n\t\tioctlsocket(g_sock, FIONBIO, &on);\r\n#endif\r\n\t\t//--------------------------------------------------\r\n\t\t// callback when connected to server\r\n\t\tif (g_onConnectionSuccessed != nullptr) g_onConnectionSuccessed(nullptr, 1);\r\n\t\tprintf(\"Connected to server.\\n\");\r\n\t}else\r\n\t{\r\n\t\tprintf(\"Could not connect to server.\\n\");\r\n\t\tfreeaddrinfo(servinfo);\r\n\t\tWSACleanup();\r\n\t}\r\n}\r\n\r\nvoid PPNetwork::ppNetwork_onReceivedRequet()\r\n{\r\n\t//--------------------------------------------------\r\n\t// NOTE: callback step\r\n\t//--------------------------------------------------\r\n\t// we pass a copy version received buffer to where it need to process\r\n\tu8 *resultBuffer = (u8*)malloc(g_waitForSize);\r\n\tmemcpy(resultBuffer, g_receivedBuffer, g_waitForSize);\r\n\tint32_t tmpSize = 
g_waitForSize;\r\n\tint32_t tmpTag = g_tag;\r\n\t//--------------------------------------------------\r\n\t// free this data and point to nullptr\r\n\tfree(g_receivedBuffer);\r\n\tg_receivedBuffer = nullptr;\r\n\tg_waitForSize = 0;\r\n\tg_receivedCounter = 0;\r\n\tg_tag = 0;\r\n\t//--------------------------------------------------\r\n\t// this buffer need to be free whenever it finish it's job\r\n\tif(g_onReceivedRequest != nullptr)\r\n\t\tg_onReceivedRequest(resultBuffer, tmpSize, tmpTag);\r\n\t\r\n}\r\n\r\n/*\r\n * @brief: process current tmp buffer data if it have\r\n */\r\nbool PPNetwork::ppNetwork_processTmpBufferData()\r\n{\r\n\tif (g_tmpReceivedBuffer != nullptr)\r\n\t{\r\n\t\t//--------------------------------------------------\r\n\t\t// if this data is more than current request data\r\n\t\tint32_t pieceSize = g_tmpReceivedSize;\r\n\t\tint32_t dataLeft = 0;\r\n\t\tbool isFullyReceived = false;\r\n\t\tif(g_tmpReceivedSize >= g_waitForSize)\r\n\t\t{\r\n\t\t\tisFullyReceived = true;\r\n\t\t\tpieceSize = g_waitForSize;\r\n\t\t\tdataLeft = g_tmpReceivedSize - g_waitForSize;\r\n\t\t}\r\n\t\t//--------------------------------------------------\r\n\t\t// copy data into current received buffer\r\n\t\tif (!g_receivedBuffer)\r\n\t\t\tg_receivedBuffer = (u8*)malloc(g_waitForSize);\r\n\t\tif (!g_receivedBuffer) return false;\r\n\t\tmemcpy(g_receivedBuffer + g_receivedCounter, g_tmpReceivedBuffer, pieceSize);\r\n\t\tg_receivedCounter += pieceSize;\r\n\r\n\t\tif(isFullyReceived)\r\n\t\t{\r\n\t\t\t//--------------------------------------------------\r\n\t\t\t// we finished current request\r\n\t\t\tppNetwork_onReceivedRequet();\r\n\t\t\t//--------------------------------------------------\r\n\t\t\t// if there is still have data left\r\n\t\t\tif (dataLeft > 0)\r\n\t\t\t{\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t// cut off received part and create new data\r\n\t\t\t\tu8* tmpBuffer = (u8*)malloc(dataLeft);\r\n\t\t\t\tmemcpy(tmpBuffer, 
g_tmpReceivedBuffer + pieceSize, dataLeft);\r\n\t\t\t\tg_tmpReceivedSize = dataLeft;\r\n\t\t\t\tfree(g_tmpReceivedBuffer);\r\n\t\t\t\tg_tmpReceivedBuffer = tmpBuffer;\r\n\r\n\t\t\t\t// return false to stop current listen process\r\n\t\t\t\treturn false;\r\n\t\t\t}\r\n\t\t}\r\n\t\tfree(g_tmpReceivedBuffer);\r\n\t\tg_tmpReceivedBuffer = nullptr;\r\n\t\tg_tmpReceivedSize = 0;\r\n\t}\r\n\treturn true;\r\n}\r\n\r\n\r\nvoid PPNetwork::ppNetwork_listenToServer()\r\n{\r\n\t//--------------------------------------------------\r\n\t// nothing was requested, so there is nothing to receive\r\n\tif (g_waitForSize <= 0)\r\n\t{\r\n\t\treturn;\r\n\t}\r\n\r\n\t//--------------------------------------------------\r\n\t// if tmp data is not null, it must be prepended to the\r\n\t// current request data\r\n\t// example case: 2 messages data send together from server\r\n\t//---------------------------------------------------\r\n\t// stop processing if the tmp buffer already satisfies the request\r\n\tif (!ppNetwork_processTmpBufferData()) return;\r\n\r\n\t//--------------------------------------------------\r\n\t// Receive data from server\r\n\tconst int bufferSize = 1024 * 24; // 24 KB buffer size\r\n\tu8* recvBuffer = (u8*)malloc(bufferSize);\r\n\t// recv returns int: -1 on error, 0 when the peer closed the connection,\r\n\t// so the result must not be stored in an unsigned variable\r\n\tint recvAmount = recv(g_sock, (char*)recvBuffer, bufferSize, 0);\r\n\t//--------------------------------------------------\r\n\t// exit thread on connection problems\r\n\tif (recvAmount <= 0) {\r\n#ifndef _WIN32\r\n\t\tif (errno != EWOULDBLOCK) g_threadExit = true;\r\n#else\r\n\t\tif (WSAGetLastError() != WSAEWOULDBLOCK) g_threadExit = true;\r\n#endif\r\n\t\t//--------------------------------------------------\r\n\t\t// free data\r\n\t\tfree(recvBuffer);\r\n\t\treturn;\r\n\t}else if(recvAmount > bufferSize)\r\n\t{\r\n\t\t//--------------------------------------------------\r\n\t\t// free 
data\r\n\t\tfree(recvBuffer);\r\n\t\treturn;\r\n\t}else\r\n\t{\r\n\t\t//std::cout << \"Client: \" << g_session->sessionID << \" recv size: \" << recvAmount << std::endl;\r\n\t\t//--------------------------------------------------\r\n\t\tif(g_receivedBuffer == nullptr)\r\n\t\t\tg_receivedBuffer = (u8*)malloc(g_waitForSize);\r\n\t\t//--------------------------------------------------\r\n\t\tint32_t pieceSize = recvAmount;\r\n\t\tint32_t dataLeft = 0;\r\n\t\tbool isFullyReceived = false;\r\n\t\t//--------------------------------------------------\r\n\t\t// check if data is fully recevied\r\n\t\tif(g_receivedCounter + recvAmount >= g_waitForSize) {\r\n\t\t\tpieceSize = g_waitForSize - g_receivedCounter;\r\n\t\t\tdataLeft = recvAmount - pieceSize;\r\n\t\t\tisFullyReceived = true;\r\n\t\t}\r\n\t\t//--------------------------------------------------\r\n\t\t// copy data to final buffer\r\n\t\tmemcpy(g_receivedBuffer + g_receivedCounter, recvBuffer, pieceSize);\r\n\t\tg_receivedCounter += pieceSize;\r\n\t\t//--------------------------------------------------\r\n\t\tif (isFullyReceived) {\r\n\t\t\t//--------------------------------------------------\r\n\t\t\t// set received request and free g_receivedBuffer\r\n\t\t\tppNetwork_onReceivedRequet();\r\n\t\t\t//--------------------------------------------------\r\n\t\t\t// store data left into temp buffer\r\n\t\t\tif (dataLeft > 0)\r\n\t\t\t{\r\n\t\t\t\t//---------------------------------------------------\r\n\t\t\t\t// free tmp buffer if it not null \r\n\t\t\t\t//NOTE: is it possible if going to this part without being nullptr ?\r\n\t\t\t\tif (g_tmpReceivedBuffer != nullptr) {\r\n\t\t\t\t\tfree(g_tmpReceivedBuffer);\r\n\t\t\t\t\tg_tmpReceivedBuffer = nullptr;\r\n\t\t\t\t}\r\n\t\t\t\tg_tmpReceivedBuffer = (u8*)malloc(dataLeft);\r\n\t\t\t\tmemcpy(g_tmpReceivedBuffer, recvBuffer + pieceSize, dataLeft);\r\n\t\t\t\tg_tmpReceivedSize = 
dataLeft;\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n\r\n\t//--------------------------------------------------\r\n\t// free data\r\n\tfree(recvBuffer);\r\n}\r\n\r\nvoid PPNetwork::ppNetwork_closeConnection()\r\n{\r\n\tif(g_connect_state == CONNECTED)\r\n\t{\r\n\t\tif (g_sock == -1) return;\r\n\t\tint rc = closesocket(g_sock);\r\n\t\tif(rc != 0)\r\n\t\t{\r\n\t\t\t// Error : ? can't close socket\r\n\t\t}\r\n\t\tg_sock = -1;\r\n\t\tg_connect_state = IDLE;\r\n\t}\r\n\t//--------------------------------------------------\r\n\t//clear all buffer data if need\r\n\tif (g_receivedBuffer) {\r\n\t\tfree(g_receivedBuffer);\r\n\t\tg_receivedBuffer = nullptr;\r\n\t\tg_waitForSize = 0;\r\n\t\tg_receivedCounter = 0;\r\n\t\tg_tag = 0;\r\n\t}\r\n\tif(g_tmpReceivedBuffer)\r\n\t{\r\n\t\tfree(g_tmpReceivedBuffer);\r\n\t\tg_tmpReceivedBuffer = nullptr;\r\n\t\tg_tmpReceivedSize = 0;\r\n\t}\r\n\t//--------------------------------------------------\r\n\t//TODO: clear all message sending buffer\r\n\t// for each g_sendingMessage -> delete msg\r\n\t//--------------------------------------------------\r\n\t// callback when connection closed\r\n\tif(g_onConnectionClosed != nullptr)\r\n\t{\r\n\t\tg_onConnectionClosed(nullptr, -1);\r\n\t}\r\n}\r\n\r\n\r\n//===============================================================================================\r\n// Controller functions\r\n//===============================================================================================\r\n#define STACKSIZE (30 * 1024)\r\nvoid PPNetwork::Start(const char* ip, const char* port)\r\n{\r\n\tif (g_connect_state == IDLE) {\r\n\t\tprintf(\"Start connect to server...\\n\");\r\n\t\tg_sendingMessage = std::queue<QueueMessage*>();\r\n\t\t//---------------------------------------------------\r\n\t\t// init variables\r\n#ifndef _WIN32\r\n\t\tg_ip = strdup(ip);\r\n\t\tg_port = strdup(port);\r\n#else\r\n\t\tg_ip = _strdup(ip);\r\n\t\tg_port = _strdup(port);\r\n#endif\r\n\t\tg_threadExit = 
false;\r\n\t\t//---------------------------------------------------\r\n\t\t// start thread\r\n#ifndef _WIN32\r\n\t\tint32_t prio = 0;\r\n\t\tsvcGetThreadPriority(&prio, CUR_THREAD_HANDLE);\r\n\t\tg_thread = threadCreate(ppNetwork_threadRun, this, STACKSIZE, prio - 1, -2, true);\r\n#else\r\n\t\t// detach the thread so it runs without blocking and never needs join()\r\n\t\tg_thread = std::thread(ppNetwork_threadRun, this);\r\n\t\tg_thread.detach();\r\n#endif\r\n\t}else\r\n\t{\r\n\t\t// log: network connection already started\r\n\t}\r\n}\r\n\r\nvoid PPNetwork::Stop()\r\n{\r\n\tg_threadExit = true;\r\n}\r\n\r\nvoid PPNetwork::SetRequestData(int32_t size, int32_t tag)\r\n{\r\n\tif(g_waitForSize > 0)\r\n\t{\r\n\t\t//std::cout << \"Client: \" << g_session->sessionID << \" send wrong request data. Request for wait still have value: \" << g_waitForSize << std::endl;\r\n\t}else\r\n\t{\r\n\t\t//std::cout << \"Client: \" << g_session->sessionID << \" register wait for size: \" << size << \" with tag: \" << tag << std::endl;\r\n\t\tg_waitForSize = size;\r\n\t\tg_tag = tag;\r\n\t}\r\n}\r\n\r\n//----------------------------------------------------------\r\n// NOTE: this function normally runs on the main thread\r\n//----------------------------------------------------------\r\nvoid PPNetwork::SendMessageData(u8 *msgBuffer, int32_t msgSize)\r\n{\r\n\tif (g_connect_state == CONNECTED) {\r\n\t\tQueueMessage* msg = new QueueMessage();\r\n\t\tmsg->msgBuffer = msgBuffer;\r\n\t\tmsg->msgSize = msgSize;\r\n\t\tg_sendingMessage.push(msg);\r\n\t}else\r\n\t{\r\n\t\tstd::cout << \"Session: #\" << g_session->sessionID << \" send message when not connected\" << std::endl;\r\n\t}\r\n}\r\n\r\nPPNetwork::~PPNetwork()\r\n{\r\n\t// g_ip and g_port were strdup'd in Start(), so free the pointers themselves\r\n\tfree((void*)g_ip);\r\n\tfree((void*)g_port);\r\n\tif (g_receivedBuffer != nullptr) free(g_receivedBuffer);\r\n\tif (g_tmpReceivedBuffer != nullptr) free(g_tmpReceivedBuffer);\r\n}\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPNetwork.h",
    "content": "#pragma once\r\n#ifndef _PP_NETWORK_H_\r\n#define _PP_NETWORK_H_\r\n\r\n#include <string.h>\r\n#include <WinSock2.h>\r\n#include <iphlpapi.h>\r\n#include <ws2tcpip.h>\r\n#include <sys/types.h>\r\n#include <fcntl.h>\r\n#include <memory>\r\n#include <cerrno>\r\n#include <functional>\r\n#include \"PPMessage.h\"\r\n#include <queue>\r\n#include <thread> \r\n\r\n#pragma comment(lib, \"Ws2_32.lib\")\r\n\r\nenum ppConectState { IDLE, CONNECTING, CONNECTED, FAIL };\r\ntypedef std::function<void(u8* buffer, u32 size, u32 tag)> PPNetworkReceivedRequest;\r\ntypedef std::function<void(u8* data, u32 code)> PPNetworkCallback;\r\n\r\ntypedef struct\r\n{\r\n\tvoid\t\t\t*msgBuffer;\r\n\tint32_t\t\t\tmsgSize;\r\n} QueueMessage;\r\n\r\nclass PPSession;\r\n\r\nclass PPNetwork\r\n{\r\nprivate:\r\n\t// socket\r\n\tconst char*\t\t\t\t\tg_ip = 0;\r\n\tconst char*\t\t\t\t\tg_port = 0;\r\n\tu32\t\t\t\t\t\t\tg_sock = -1;\r\n\tppConectState\t\t\t\tg_connect_state = IDLE;\r\n\t// in-out variables\r\n\tu32\t\t\t\t\t\t\tg_tag = 0;\r\n\tu32\t\t\t\t\t\t\tg_waitForSize = 0;\r\n\tu32\t\t\t\t\t\t\tg_receivedCounter = 0;\r\n\tu8*\t\t\t\t\t\t\tg_receivedBuffer = nullptr;\r\n\tu8*\t\t\t\t\t\t\tg_tmpReceivedBuffer = nullptr;\r\n\tu32\t\t\t\t\t\t\tg_tmpReceivedSize = 0;\r\n\t// thread\r\n\tstd::thread\t\t\t\t\tg_thread;\r\n\tvolatile bool\t\t\t\tg_threadExit = false;\r\n\tstd::queue<QueueMessage*>\tg_sendingMessage;\r\n\t// callback\r\n\tPPNetworkReceivedRequest\tg_onReceivedRequest = nullptr;\r\n\tPPNetworkCallback\t\t\tg_onConnectionSuccessed = nullptr;\r\n\tPPNetworkCallback\t\t\tg_onConnectionClosed = nullptr;\r\n\r\nprivate:\r\n\tstatic void ppNetwork_threadRun(void * arg);\r\n\tvoid ppNetwork_listenToServer();\r\n\tvoid ppNetwork_connectToServer();\r\n\tvoid ppNetwork_closeConnection();\r\n\tvoid ppNetwork_onReceivedRequet();\r\n\tbool ppNetwork_processTmpBufferData();\r\n\tvoid 
ppNetwork_sendMessage();\r\n\r\npublic:\r\n\t~PPNetwork();\r\n\r\n\tPPSession*\t\t\t\t\tg_session;\r\n\r\n\tvoid Start(const char *ip, const char *port);\r\n\tvoid Stop();\r\n\tppConectState GetConnectionStatus() const { return g_connect_state; }\r\n\tvoid SetRequestData(int32_t size, int32_t tag = 0);\r\n\tvoid SendMessageData(u8 *msgBuffer, int32_t msgSize);\r\n\r\n\tinline void SetOnReceivedRequest(PPNetworkReceivedRequest _callback) { g_onReceivedRequest = _callback; }\r\n\tinline void SetOnConnectionSuccessed(PPNetworkCallback _callback) { g_onConnectionSuccessed = _callback; }\r\n\tinline void SetOnConnectionClosed(PPNetworkCallback _callback) { g_onConnectionClosed = _callback; }\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPSession.cpp",
    "content": "#include \"PPSession.h\"\r\n#include \"PPSessionManager.h\"\r\n\r\nPPSession::~PPSession()\r\n{\r\n\tif (g_network != nullptr) delete g_network;\r\n}\r\n\r\n\r\nvoid PPSession::initSession()\r\n{\r\n\tif (g_network != nullptr) return;\r\n\tg_network = new PPNetwork();\r\n\tg_network->g_session = this;\r\n\tprintf(\"Init new session.\\n\");\r\n\t//------------------------------------------------------\r\n\t// callback when request data is received\r\n\tg_network->SetOnReceivedRequest([=](u8* buffer, u32 size, u32 tag) {\r\n\t\t//------------------------------------------------------\r\n\t\t// verify authentication\r\n\t\t//------------------------------------------------------\r\n\t\tif (!g_authenticated)\r\n\t\t{\r\n\t\t\tif (tag == PPREQUEST_AUTHEN)\r\n\t\t\t{\r\n\t\t\t\t// check for authentication\r\n\t\t\t\tPPMessage *authenMsg = new PPMessage();\r\n\t\t\t\tif(authenMsg->ParseHeader(buffer))\r\n\t\t\t\t{\r\n\t\t\t\t\tif(authenMsg->GetMessageCode() == MSG_CODE_RESULT_AUTHENTICATION_SUCCESS)\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\tprintf(\"Authentication sucessfully.\\n\");\r\n\t\t\t\t\t\tg_authenticated = true;\r\n\t\t\t\t\t\tif (g_onAuthenSuccessed != nullptr) g_onAuthenSuccessed(nullptr, 0);\r\n\t\t\t\t\t}else\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\tprintf(\"Authentication failed.\\n\");\r\n\t\t\t\t\t\treturn;\r\n\t\t\t\t\t}\r\n\t\t\t\t}else\r\n\t\t\t\t{\r\n\t\t\t\t\tprintf(\"Authentication failed.\\n\");\r\n\t\t\t\t\treturn;\r\n\t\t\t\t}\r\n\t\t\t\tdelete authenMsg;\r\n\t\t\t}\r\n\t\t\telse\r\n\t\t\t{\r\n\t\t\t\tprintf(\"Client was not authentication.\\n\");\r\n\t\t\t\treturn;\r\n\t\t\t}\r\n\t\t}\r\n\t\t//------------------------------------------------------\r\n\t\t// process data by tag\r\n\t\tswitch (tag)\r\n\t\t{\r\n\t\tcase PPREQUEST_HEADER:\r\n\t\t{\r\n\t\t\tif (!g_tmpMessage) g_tmpMessage = new PPMessage();\r\n\t\t\tif (g_tmpMessage->ParseHeader(buffer))\r\n\t\t\t{\r\n\t\t\t\t//std::cout << \"Client: #\" << sessionID << \" received message header: \" << 
(u32)g_tmpMessage->GetMessageCode() << std::endl;\r\n\t\t\t\t//----------------------------------------------------\r\n\t\t\t\t// request body part of this message\r\n\t\t\t\tg_network->SetRequestData(g_tmpMessage->GetContentSize(), PPREQUEST_BODY);\r\n\t\t\t}\r\n\t\t\telse\r\n\t\t\t{\r\n\t\t\t\tdelete g_tmpMessage;\r\n\t\t\t\tg_tmpMessage = nullptr;\r\n\t\t\t\tprintf(\"Parse message header fail. remove message.\\n\");\r\n\t\t\t}\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tcase PPREQUEST_BODY:\r\n\t\t{\r\n\t\t\t//------------------------------------------------------\r\n\t\t\t// if tmp message is null that mean this is useless data then we avoid it\r\n\t\t\tif (!g_tmpMessage) return;\r\n\t\t\t// verify buffer size with message estimate size\r\n\t\t\tif (size == g_tmpMessage->GetContentSize())\r\n\t\t\t{\r\n\t\t\t\t// process message\r\n\t\t\t\tswitch (g_sessionType)\r\n\t\t\t\t{\r\n\t\t\t\tcase PPSESSION_MOVIE:\r\n\t\t\t\t{\r\n\t\t\t\t\tprocessMovieSession(buffer, size);\r\n\t\t\t\t\tbreak;\r\n\t\t\t\t}\r\n\t\t\t\tcase PPSESSION_SCREEN_CAPTURE:\r\n\t\t\t\t{\r\n\t\t\t\t\tprocessScreenCaptureSession(buffer, size);\r\n\t\t\t\t\tbreak;\r\n\t\t\t\t}\r\n\t\t\t\tcase PPSESSION_INPUT_CAPTURE:\r\n\t\t\t\t{\r\n\t\t\t\t\tprocessInputSession(buffer, size);\r\n\t\t\t\t\tbreak;\r\n\t\t\t\t}\r\n\t\t\t\t}\r\n\r\n\t\t\t\t//----------------------------------------------------------\r\n\t\t\t\t// Request for next message\r\n\t\t\t\tg_network->SetRequestData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\t\t\t}\r\n\t\t\t//------------------------------------------------------\r\n\t\t\t// remove message after use\r\n\t\t\tdelete g_tmpMessage;\r\n\t\t\tg_tmpMessage = nullptr;\r\n\t\t\tbreak;\r\n\t\t}\r\n\t\tdefault: break;\r\n\t\t}\r\n\t\t//------------------------------------------------------\r\n\t\t// free buffer after use\r\n\t\tfree(buffer);\r\n\t});\r\n}\r\n\r\nvoid PPSession::InitMovieSession()\r\n{\r\n\tinitSession();\r\n\t//--------------------------------------\r\n\t// init specific for movie 
session\r\n\tg_sessionType = PPSESSION_MOVIE;\r\n\tprintf(\"Init movie session.\\n\");\r\n}\r\n\r\nvoid PPSession::InitScreenCaptureSession(PPSessionManager* manager)\r\n{\r\n\tg_manager = manager;\r\n\tinitSession();\r\n\t//--------------------------------------\r\n\t// init specific for screen capture session\r\n\tg_sessionType = PPSESSION_SCREEN_CAPTURE;\r\n\tprintf(\"Init screen capture session.\\n\");\r\n\t//--------------------------------------\r\n\tSS_framePiecesCached = std::map<u32, FramePiece*>();\r\n}\r\n\r\nvoid PPSession::InitInputCaptureSession()\r\n{\r\n\tinitSession();\r\n\t//--------------------------------------\r\n\t// init specific for input capture session\r\n\tg_sessionType = PPSESSION_INPUT_CAPTURE;\r\n\tprintf(\"Init input session.\\n\");\r\n}\r\n\r\nvoid PPSession::StartSession(const char* ip, const char* port, PPNetworkCallback authenSuccessed)\r\n{\r\n\tif (g_network == nullptr) return;\r\n\tg_onAuthenSuccessed = authenSuccessed;\r\n\tg_network->SetOnConnectionSuccessed([=](u8* data, u32 size)\r\n\t{\r\n\t\tstd::cout << \"Session: #\" << sessionID << \" send authentication message\" << std::endl;\r\n\t\t// NOTE: this is not called on the main thread!\r\n\t\t//--------------------------------------------------\r\n\t\tint8_t code = 0;\r\n\t\tif (g_sessionType == PPSESSION_MOVIE) code = MSG_CODE_REQUEST_AUTHENTICATION_MOVIE;\r\n\t\telse if (g_sessionType == PPSESSION_SCREEN_CAPTURE) code = MSG_CODE_REQUEST_AUTHENTICATION_SCREEN_CAPTURE;\r\n\t\telse if (g_sessionType == PPSESSION_INPUT_CAPTURE) code = MSG_CODE_REQUEST_AUTHENTICATION_INPUT;\r\n\t\tif(code == 0)\r\n\t\t{\r\n\t\t\tprintf(\"Invalid session type.\\n\");\r\n\t\t\treturn;\r\n\t\t}\r\n\t\t//--------------------------------------------------\r\n\t\t// build the authentication message for this session type\r\n\t\tPPMessage *authenMsg = new PPMessage();\r\n\t\tauthenMsg->BuildMessageHeader(code);\r\n\t\tu8* msgBuffer = authenMsg->BuildMessageEmpty();\r\n\t\t//--------------------------------------------------\r\n\t\t// send 
authentication message\r\n\t\tg_network->SendMessageData(msgBuffer, authenMsg->GetMessageSize());\r\n\t\t//--------------------------------------------------\r\n\t\t// set request to get result message\r\n\t\tg_network->SetRequestData(MSG_COMMAND_SIZE, PPREQUEST_AUTHEN);\r\n\t\tdelete authenMsg;\r\n\t});\r\n\tg_network->SetOnConnectionClosed([=](u8* data, u32 size)\r\n\t{\r\n\t\t// NOTE: this is not called on the main thread!\r\n\t\tstd::cout << \"Session: #\" << sessionID << \" connection interrupted\" << std::endl;\r\n\t});\r\n\tg_network->Start(ip, port);\r\n}\r\n\r\nvoid PPSession::CloseSession()\r\n{\r\n\tif (g_network == nullptr) return;\r\n\tg_network->Stop();\r\n\tg_authenticated = false;\r\n\tSS_Reset();\r\n}\r\n\r\n\r\nvoid PPSession::processMovieSession(u8* buffer, size_t size)\r\n{\r\n\t// NOTE: this is not called on the main thread!\r\n\t//------------------------------------------------------\r\n\t// process message data by message type\r\n\t//------------------------------------------------------\r\n\tswitch (g_tmpMessage->GetMessageCode())\r\n\t{\r\n\tdefault: break;\r\n\t}\r\n}\r\n\r\n\r\n\r\nvoid PPSession::processScreenCaptureSession(u8* buffer, size_t size)\r\n{\r\n\t// NOTE: this is not called on the main thread!\r\n\t//------------------------------------------------------\r\n\t// process message data by message type\r\n\t//------------------------------------------------------\r\n\tswitch (g_tmpMessage->GetMessageCode())\r\n\t{\r\n\tcase MSG_CODE_REQUEST_NEW_SCREEN_FRAME:\r\n\t{\r\n\t\ttry {\r\n\t\t\t//------------------------------------------------------\r\n\t\t\t// when waiting for a new frame, we need to:\r\n\t\t\t{\r\n\t\t\t\t//std::cout << \"Session #\" << sessionID << \" have received: \" << size << std::endl;\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t// send request that client received frame\r\n\t\t\t\tPPMessage *msgObj = new 
PPMessage();\r\n\t\t\t\tmsgObj->BuildMessageHeader(MSG_CODE_REQUEST_SCREEN_RECEIVED_FRAME);\r\n\t\t\t\tu8* msgBuffer = msgObj->BuildMessageEmpty();\r\n\t\t\t\t//--------------------------------------------------\r\n\t\t\t\t// send frame-received acknowledgement\r\n\t\t\t\tg_network->SendMessageData(msgBuffer, msgObj->GetMessageSize());\r\n\t\t\t\tdelete msgObj;\r\n\t\t\t}\r\n\t\t\t//--------------------------------------------------\r\n\t\t\t// hand the packet buffer over to the session manager\r\n\t\t\tg_manager->AppendBuffer(buffer, size);\r\n\r\n\t\t}catch(cv::Exception& e)\r\n\t\t{\r\n\t\t\tstd::cout << e.msg << std::endl;\r\n\t\t}\r\n\t\tbreak;\r\n\t}\r\n\r\n\tdefault: break;\r\n\t}\r\n}\r\n\r\nvoid PPSession::processInputSession(u8* buffer, size_t size)\r\n{\r\n\t//NOTE: this is not called on the main thread!\r\n\t//------------------------------------------------------\r\n\t// process message data by message type\r\n\t//------------------------------------------------------\r\n\tswitch (g_tmpMessage->GetMessageCode())\r\n\t{\r\n\tdefault: break;\r\n\t}\r\n}\r\n\r\n//-----------------------------------------------------\r\n// screen capture\r\n//-----------------------------------------------------\r\nvoid PPSession::SS_StartStream()\r\n{\r\n\tPPMessage *authenMsg = new PPMessage();\r\n\tauthenMsg->BuildMessageHeader(MSG_CODE_REQUEST_START_SCREEN_CAPTURE);\r\n\tu8* msgBuffer = authenMsg->BuildMessageEmpty();\r\n\tg_network->SendMessageData(msgBuffer, authenMsg->GetMessageSize());\r\n\tg_network->SetRequestData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\tSS_v_isStartStreaming = true;\r\n\tdelete authenMsg;\r\n}\r\n\r\nvoid PPSession::SS_StopStream()\r\n{\r\n\tPPMessage *authenMsg = new PPMessage();\r\n\tauthenMsg->BuildMessageHeader(MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE);\r\n\tu8* msgBuffer = authenMsg->BuildMessageEmpty();\r\n\tg_network->SendMessageData(msgBuffer, authenMsg->GetMessageSize());\r\n\tg_network->SetRequestData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\tSS_v_isStartStreaming = 
false;\r\n\tdelete authenMsg;\r\n}\r\n\r\nvoid PPSession::SS_ChangeSetting()\r\n{\r\n\tPPMessage *authenMsg = new PPMessage();\r\n\tauthenMsg->BuildMessageHeader(MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE);\r\n\r\n\t//-----------------------------------------------\r\n\t// alloc msg content block: 1 x u8 + 3 x u32 = 13 bytes\r\n\tsize_t contentSize = 13;\r\n\tu8* contentBuffer = (u8*)malloc(sizeof(u8) * contentSize);\r\n\tu8* pointer = contentBuffer;\r\n\t//----------------------------------------------\r\n\t// setting: wait for received frame\r\n\tu8 _setting_waitToReceivedFrame = SS_setting_waitToReceivedFrame ? 1 : 0;\r\n\tWRITE_U8(pointer, _setting_waitToReceivedFrame);\r\n\t// setting: smooth step frames (only active when waitToReceivedFrame = true)\r\n\tWRITE_U32(pointer, SS_setting_smoothStepFrames);\r\n\t// setting: frame quality [0 ... 100]\r\n\tWRITE_U32(pointer, SS_setting_sourceQuality);\r\n\t// setting: frame scale [0 ... 100]\r\n\tWRITE_U32(pointer, SS_setting_sourceScale);\r\n\t//-----------------------------------------------\r\n\t// build message\r\n\tu8* msgBuffer = authenMsg->BuildMessage(contentBuffer, contentSize);\r\n\tg_network->SendMessageData(msgBuffer, authenMsg->GetMessageSize());\r\n\tg_network->SetRequestData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n\tdelete authenMsg;\r\n}\r\n\r\nvoid PPSession::SS_Reset()\r\n{\r\n\tSS_v_isStartStreaming = false;\r\n}\r\n\r\nFramePiece* PPSession::SafeGetFramePiece(u32 index)\r\n{\r\n\tstd::lock_guard<std::mutex> lock(SS_frameCachedMutex);\r\n\tauto iter = SS_framePiecesCached.find(index);\r\n\tif(iter != SS_framePiecesCached.end())\r\n\t{\r\n\t\tFramePiece* piece = iter->second;\r\n\t\tSS_framePiecesCached.erase(iter);\r\n\t\treturn piece;\r\n\t}\r\n\treturn nullptr;\r\n}\r\n\r\nvoid PPSession::RequestForheader()\r\n{\r\n\tg_network->SetRequestData(MSG_COMMAND_SIZE, PPREQUEST_HEADER);\r\n}\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPSession.h",
    "content": "#pragma once\r\n#ifndef _PP_SESSION_H_\r\n#define _PP_SESSION_H_\r\n\r\n//=======================================================================\r\n// PinBox Video Session\r\n// Contains streaming data for either a movie or a screen capture stream\r\n// With:\r\n// 1, Movie :\r\n// TODO: using ffmpeg to decode movie stream from server\r\n// TODO: support 3D movie on N3DS\r\n// TODO: mainly support RGB565 for lower size and speed up stream\r\n// 2, Screen Capture:\r\n// TODO: capture the entire screen or a region configured on the server\r\n//-----------------------------------------------------------------------\r\n// Note: each session runs standalone and only one session can run at a\r\n// time.\r\n//=======================================================================\r\n#include \"PPNetwork.h\"\r\n#include <iostream>\r\n#include <mutex>\r\n#include <map>\r\n\r\nenum PPSession_Type { PPSESSION_NONE, PPSESSION_MOVIE, PPSESSION_SCREEN_CAPTURE, PPSESSION_INPUT_CAPTURE};\r\n\r\n#define MSG_COMMAND_SIZE 9\r\n\r\n#define PPREQUEST_AUTHEN 50\r\n#define PPREQUEST_HEADER 10\r\n#define PPREQUEST_BODY 15\r\n// authentication code\r\n#define MSG_CODE_REQUEST_AUTHENTICATION_MOVIE 1\r\n#define MSG_CODE_REQUEST_AUTHENTICATION_SCREEN_CAPTURE 2\r\n#define MSG_CODE_REQUEST_AUTHENTICATION_INPUT 3\r\n#define MSG_CODE_RESULT_AUTHENTICATION_SUCCESS 5\r\n#define MSG_CODE_RESULT_AUTHENTICATION_FAILED 6\r\n// screen capture code\r\n#define MSG_CODE_REQUEST_START_SCREEN_CAPTURE 10\r\n#define MSG_CODE_REQUEST_STOP_SCREEN_CAPTURE 11\r\n#define MSG_CODE_REQUEST_CHANGE_SETTING_SCREEN_CAPTURE 12\r\n#define MSG_CODE_REQUEST_NEW_SCREEN_FRAME 15\r\n#define MSG_CODE_REQUEST_SCREEN_RECEIVED_FRAME 16\r\n#define MSG_CODE_REQUEST_NEW_AUDIO_FRAME 18\r\n#define MSG_CODE_REQUEST_RECEIVED_AUDIO_FRAME 19\r\n\r\n\r\n\r\n#define AUDIO_CHANNEL\t0x08\r\n\r\ntypedef struct\r\n{\r\n\tu32 frameIndex;\r\n\tu32 pieceIndex;\r\n\tu8* piece;\r\n\tu32 pieceSize;\r\n\tvoid release() { if (piece != nullptr) 
{ free(piece); piece = nullptr; } }\r\n\r\n} FramePiece;\r\n\r\nclass PPSessionManager;\r\n\r\nclass PPSession\r\n{\r\nprivate:\r\n\t\r\n\tPPSessionManager\t\t\t\t*g_manager;\r\n\tPPSession_Type\t\t\t\t\tg_sessionType = PPSESSION_NONE;\r\n\tPPNetwork*\t\t\t\t\t\tg_network = nullptr;\r\n\tPPMessage*\t\t\t\t\t\tg_tmpMessage = nullptr;\r\n\tbool\t\t\t\t\t\t\tg_authenticated = false;\r\n\tPPNetworkCallback\t\t\t\tg_onAuthenSuccessed = nullptr;\r\nprivate:\r\n\tvoid initSession();\r\n\r\n\tvoid processMovieSession(u8* buffer, size_t size);\r\n\tvoid processScreenCaptureSession(u8* buffer, size_t size);\r\n\tvoid processInputSession(u8* buffer, size_t size);\r\npublic:\r\n\tint\t\t\t\t\t\t\t\tsessionID = -1;\r\n\t~PPSession();\r\n\r\n\tvoid InitMovieSession();\r\n\tvoid InitScreenCaptureSession(PPSessionManager* manager);\r\n\tvoid InitInputCaptureSession();\r\n\r\n\tvoid StartSession(const char* ip, const char* port, PPNetworkCallback authenSuccessed);\r\n\tvoid CloseSession();\r\n\r\n\t//-----------------------------------------------------\r\n\t// screen capture\r\n\t//-----------------------------------------------------\r\nprivate:\r\n\t//----------------------------------------------------------------------\r\n\t// profile setting\r\n\ttypedef struct\r\n\t{\r\n\t\tstd::string\t\t\t\t\t\tprofileName = \"Default\";\r\n\t\tbool\t\t\t\t\t\t\twaitToReceivedFrame = false;\r\n\t\tu32\t\t\t\t\t\t\t\tsmoothStepFrames = 0;\r\n\t\tu32\t\t\t\t\t\t\t\tsourceQuality = 100;\r\n\t\tu32\t\t\t\t\t\t\t\tsourceScale = 100;\r\n\t} SSProfile;\r\n\t//----------------------------------------------------------------------\r\n\tbool\t\t\t\t\t\t\t\tSS_v_isStartStreaming = false;\r\n\tbool\t\t\t\t\t\t\t\tSS_setting_waitToReceivedFrame = true;\r\n\tu32\t\t\t\t\t\t\t\t\tSS_setting_smoothStepFrames = 1;\t\t// allows smoother frame switching when there is a delay receiving frames\r\n\tu32\t\t\t\t\t\t\t\t\tSS_setting_sourceQuality = 50;\t\t\t// webp quality 
control\r\n\tu32\t\t\t\t\t\t\t\t\tSS_setting_sourceScale = 75;\t\t\t// frame scale control, e.g. 75 = 0.75 of the original size\r\n\t//----------------------------------------------------------------------\r\n\t// per frame, each session stores only one piece as a FramePiece object\r\n\tstd::map<u32, FramePiece*>\t\t\tSS_framePiecesCached;\r\n\tstd::mutex\t\t\t\t\t\t\tSS_frameCachedMutex;\r\nprivate:\r\n\r\n\r\npublic:\r\n\tvoid\t\t\t\t\t\t\t\tSS_StartStream();\r\n\tvoid\t\t\t\t\t\t\t\tSS_StopStream();\r\n\tvoid\t\t\t\t\t\t\t\tSS_ChangeSetting();\r\n\r\n\tvoid\t\t\t\t\t\t\t\tSS_Reset();\r\n\tFramePiece*\t\t\t\t\t\t\tSafeGetFramePiece(u32 index);\r\n\tvoid\t\t\t\t\t\t\t\tRequestForheader();\r\n\t//-----------------------------------------------------\r\n\t// movie\r\n\t//-----------------------------------------------------\r\n\r\n\r\n\t//-----------------------------------------------------\r\n\t// input\r\n\t//-----------------------------------------------------\r\n};\r\n\r\n#endif"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPSessionManager.cpp",
    "content": "#include \"PPSessionManager.h\"\r\n\r\n\r\n\r\n\r\nPPSessionManager::PPSessionManager()\r\n{\r\n}\r\n\r\n\r\nPPSessionManager::~PPSessionManager()\r\n{\r\n}\r\n\r\nvoid PPSessionManager::InitScreenCapture(u32 numberOfSessions)\r\n{\r\n\tm_decoder = new PPDecoder();\r\n\tm_decoder->startDecodeThread();\r\n\r\n\tif (numberOfSessions == 0) numberOfSessions = 1;\r\n\tif (m_screenCaptureSessions.size() > 0) return;\r\n\tm_screenCaptureSessions = std::vector<PPSession*>();\r\n\tfor(u32 i = 0; i < numberOfSessions; i++)\r\n\t{\r\n\t\tPPSession* session = new PPSession();\r\n\t\tsession->sessionID = (int)i;\r\n\t\tsession->InitScreenCaptureSession(this);\r\n\t\tm_screenCaptureSessions.push_back(session);\r\n\t}\r\n}\r\n\r\nvoid PPSessionManager::StartStreaming(const char* ip, const char* port)\r\n{\r\n\tif (m_staticVideoBuffer == nullptr)\r\n\t{\r\n\t\tm_staticVideoBuffer = (u8*)malloc(VideoBufferSize);\r\n\t\tm_videoBufferSize = 0;\r\n\t\tm_videoBufferCursor = 0;\r\n\t}\r\n\t//==============================================\r\n\r\n\r\n\tm_currentDisplayFrame = 0;\r\n\tm_frameTracker.clear();\r\n\tm_connectedSession = 0;\r\n\tfor (size_t i = 0; i < m_screenCaptureSessions.size(); i++)\r\n\t{\r\n\t\t// connect every session to the server\r\n\t\tm_screenCaptureSessions[i]->StartSession(ip, port, [=](u8* data, u32 size)\r\n\t\t{\r\n\t\t\tm_connectedSession++;\r\n\t\t\tif(m_connectedSession == m_screenCaptureSessions.size())\r\n\t\t\t{\r\n\t\t\t\t// once every session is authenticated,\r\n\t\t\t\t// start streaming\r\n\t\t\t\tstd::cout << \"All sessions are connected to the server : Start Streaming\" << std::endl;\r\n\t\t\t\t_startStreaming();\r\n\t\t\t}\r\n\t\t});\r\n\t}\r\n}\r\n\r\nvoid PPSessionManager::StopStreaming()\r\n{\r\n\tfor (size_t i = 0; i < m_screenCaptureSessions.size(); i++)\r\n\t{\r\n\t\tm_screenCaptureSessions[i]->SS_StopStream();\r\n\t}\r\n}\r\n\r\nvoid PPSessionManager::Close()\r\n{\r\n\tfor (size_t i = 0; i < m_screenCaptureSessions.size(); 
i++)\r\n\t{\r\n\t\tm_screenCaptureSessions[i]->CloseSession();\r\n\t}\r\n}\r\n\r\nvoid PPSessionManager::AppendBuffer(u8* buffer, u32 size)\r\n{\r\n\tm_decoder->appendBuffer(buffer, size);\r\n\t//memmove(m_staticVideoBuffer + m_videoBufferSize, buffer, size);\r\n\t//m_videoBufferSize += size;\r\n}\r\n\r\nvoid PPSessionManager::DecodeVideo()\r\n{\r\n}\r\n\r\nvoid PPSessionManager::_startStreaming()\r\n{\r\n\tm_screenCaptureSessions[0]->SS_ChangeSetting();\r\n\tm_screenCaptureSessions[0]->SS_StartStream();\r\n\tfor (int i = 1; i < m_screenCaptureSessions.size(); i++)\r\n\t{\r\n\t\tm_screenCaptureSessions[i]->RequestForheader();\r\n\t}\r\n}\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PPSessionManager.h",
    "content": "#pragma once\r\n#include <vector>\r\n#include \"PPSession.h\"\r\n#include <webp/decode.h>\r\n#include \"opusfile.h\"\r\n#include \"PPDecoder.h\"\r\n\r\n#ifdef _WIN32 \r\n//===========================================================\r\n// for test only\r\n// openCV\r\n#include <opencv2/core.hpp>\r\n#include <opencv2/imgcodecs.hpp>\r\n#include <opencv2/highgui.hpp>\r\n#include <opencv2/opencv.hpp>\r\n//===========================================================\r\n#endif\r\n\r\n#define VideoBufferSize  0xA00000\r\n\r\nclass PPSessionManager\r\n{\r\nprivate:\r\n\tPPDecoder*\t\t\t\t\t\t\t\t\t\tm_decoder;\r\n\tstd::vector<PPSession*>\t\t\t\t\t\t\tm_screenCaptureSessions;\r\n\tint\t\t\t\t\t\t\t\t\t\t\t\tm_commandSessionIndex = 0;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\tm_connectedSession = 0;\r\n\tvoid\t\t\t\t\t\t\t\t\t\t\t_startStreaming();\r\n\tstd::map<u32, u32>\t\t\t\t\t\t\t\tm_frameTracker;\r\n\tstd::mutex\t\t\t\t\t\t\t\t\t\tm_frameTrackerMutex;\r\n\tu32\t\t\t\t\t\t\t\t\t\t\t\tm_currentDisplayFrame = 0;\r\npublic:\r\n\tPPSessionManager();\r\n\t~PPSessionManager();\r\n\r\n\tu8* m_staticVideoBuffer = nullptr;\r\n\tu32 m_videoBufferCursor = 0;\r\n\tu32 m_videoBufferSize = 0;\r\n\r\n\r\n\tvoid InitScreenCapture(u32 numberOfSessions);\r\n\tvoid StartStreaming(const char* ip, const char* port);\r\n\tvoid StopStreaming();\r\n\tvoid Close();\r\n\r\n\tvoid AppendBuffer(u8* buffer, u32 size);\r\n\r\n\tvoid DecodeVideo();\r\n};\r\n\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PinBoxTestProject.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<Project DefaultTargets=\"Build\" ToolsVersion=\"14.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\r\n  <ItemGroup Label=\"ProjectConfigurations\">\r\n    <ProjectConfiguration Include=\"Debug|Win32\">\r\n      <Configuration>Debug</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Release|Win32\">\r\n      <Configuration>Release</Configuration>\r\n      <Platform>Win32</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Debug|x64\">\r\n      <Configuration>Debug</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n    <ProjectConfiguration Include=\"Release|x64\">\r\n      <Configuration>Release</Configuration>\r\n      <Platform>x64</Platform>\r\n    </ProjectConfiguration>\r\n  </ItemGroup>\r\n  <PropertyGroup Label=\"Globals\">\r\n    <ProjectGuid>{901BAB37-7D47-4095-9D2A-F203382FDA88}</ProjectGuid>\r\n    <RootNamespace>PinBoxTestProject</RootNamespace>\r\n    <WindowsTargetPlatformVersion>10.0.10586.0</WindowsTargetPlatformVersion>\r\n  </PropertyGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <CharacterSet>MultiByte</CharacterSet>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>false</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <WholeProgramOptimization>true</WholeProgramOptimization>\r\n    <CharacterSet>MultiByte</CharacterSet>\r\n  </PropertyGroup>\r\n  
<PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>true</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <CharacterSet>MultiByte</CharacterSet>\r\n  </PropertyGroup>\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\r\n    <ConfigurationType>Application</ConfigurationType>\r\n    <UseDebugLibraries>false</UseDebugLibraries>\r\n    <PlatformToolset>v140</PlatformToolset>\r\n    <WholeProgramOptimization>true</WholeProgramOptimization>\r\n    <CharacterSet>MultiByte</CharacterSet>\r\n  </PropertyGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\r\n  <ImportGroup Label=\"ExtensionSettings\">\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"Shared\">\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\r\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\r\n    <Import 
Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\r\n  </ImportGroup>\r\n  <PropertyGroup Label=\"UserMacros\" />\r\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\r\n    <IncludePath>$(VC_IncludePath);$(WindowsSDK_IncludePath);E:\\3ds\\PinBoxStreaming\\ThirdParty\\opus-1.2.1\\include;E:\\3ds\\PinBoxStreaming\\ThirdParty\\opusfile-0.9\\include</IncludePath>\r\n  </PropertyGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\r\n    <ClCompile>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <Optimization>Disabled</Optimization>\r\n      <SDLCheck>true</SDLCheck>\r\n      <PreprocessorDefinitions>_CRT_SECURE_NO_WARNINGS;_STDC_CONSTANT_MACROS;_MBCS;%(PreprocessorDefinitions)</PreprocessorDefinitions>\r\n    </ClCompile>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\r\n    <ClCompile>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <Optimization>Disabled</Optimization>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\r\n    <ClCompile>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <Optimization>MaxSpeed</Optimization>\r\n      <FunctionLevelLinking>true</FunctionLevelLinking>\r\n      <IntrinsicFunctions>true</IntrinsicFunctions>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n    <Link>\r\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\r\n      <OptimizeReferences>true</OptimizeReferences>\r\n    </Link>\r\n  </ItemDefinitionGroup>\r\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\r\n    <ClCompile>\r\n      <WarningLevel>Level3</WarningLevel>\r\n      <Optimization>MaxSpeed</Optimization>\r\n      
<FunctionLevelLinking>true</FunctionLevelLinking>\r\n      <IntrinsicFunctions>true</IntrinsicFunctions>\r\n      <SDLCheck>true</SDLCheck>\r\n    </ClCompile>\r\n    <Link>\r\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\r\n      <OptimizeReferences>true</OptimizeReferences>\r\n    </Link>\r\n  </ItemDefinitionGroup>\r\n  <ItemGroup>\r\n    <ClCompile Include=\"main.cpp\" />\r\n    <ClCompile Include=\"PPDecoder.cpp\" />\r\n    <ClCompile Include=\"PPMessage.cpp\" />\r\n    <ClCompile Include=\"PPNetwork.cpp\" />\r\n    <ClCompile Include=\"PPSession.cpp\" />\r\n    <ClCompile Include=\"PPSessionManager.cpp\" />\r\n    <ClCompile Include=\"yuv_rgb.c\" />\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClInclude Include=\"PPDecoder.h\" />\r\n    <ClInclude Include=\"PPMessage.h\" />\r\n    <ClInclude Include=\"PPNetwork.h\" />\r\n    <ClInclude Include=\"PPSession.h\" />\r\n    <ClInclude Include=\"PPSessionManager.h\" />\r\n    <ClInclude Include=\"yuv_rgb.h\" />\r\n  </ItemGroup>\r\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\r\n  <ImportGroup Label=\"ExtensionTargets\">\r\n  </ImportGroup>\r\n</Project>"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/PinBoxTestProject.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\r\n  <ItemGroup>\r\n    <Filter Include=\"Source Files\">\r\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\r\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\r\n    </Filter>\r\n    <Filter Include=\"Resource Files\">\r\n      <UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>\r\n      <Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>\r\n    </Filter>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClCompile Include=\"PPMessage.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPNetwork.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPSession.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"main.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPSessionManager.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"PPDecoder.cpp\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n    <ClCompile Include=\"yuv_rgb.c\">\r\n      <Filter>Source Files</Filter>\r\n    </ClCompile>\r\n  </ItemGroup>\r\n  <ItemGroup>\r\n    <ClInclude Include=\"PPMessage.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPNetwork.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPSession.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPSessionManager.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude Include=\"PPDecoder.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n    <ClInclude 
Include=\"yuv_rgb.h\">\r\n      <Filter>Source Files</Filter>\r\n    </ClInclude>\r\n  </ItemGroup>\r\n</Project>"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/main.cpp",
    "content": "#define __STDC_CONSTANT_MACROS\r\n\r\n#include <iostream>\r\n#include <thread>\r\n#include \"PPSessionManager.h\"\r\n\r\nvoid updateSessionManager(void* arg)\r\n{\r\n\tPPSessionManager* sm = (PPSessionManager*)arg;\r\n\tif( sm != nullptr)\r\n\t{\r\n\t\twhile(true)\r\n\t\t{\r\n\t\t\tsm->DecodeVideo();\r\n\t\t}\r\n\t}\r\n}\r\n\r\n\r\nint main()\r\n{\r\n\r\n#ifdef _WIN32\r\n\tWSADATA wsaData;\r\n\tWSAStartup(MAKEWORD(2, 2), &wsaData);\r\n#endif\r\n\r\n\tstd::cout << \"==============================================================\\n\";\r\n\tstd::cout << \"= Test application for PinBox. Simulating client side (3ds)  =\\n\";\r\n\tstd::cout << \"==============================================================\\n\";\r\n\tPPSessionManager* sm = new PPSessionManager();\r\n\tsm->InitScreenCapture(1);\r\n\r\n\tstd::thread g_thread = std::thread(updateSessionManager, sm);\r\n\tg_thread.detach();\r\n\r\n\tstd::cout << \"Commands: a = start streaming, s = stop streaming, q = quit.\\nInput: \";\r\n\tchar input;\r\n\twhile(true)\r\n\t{\r\n\t\tstd::cin >> input;\r\n\t\tif (input == 'q') break;\r\n\t\tif(input == 'a')\r\n\t\t{\r\n\t\t\tsm->StartStreaming(\"192.168.31.183\", \"1234\");\r\n\t\t}\r\n\t\tif (input == 's')\r\n\t\t{\r\n\t\t\tsm->StopStreaming();\r\n\t\t}\r\n\t\tinput = ' ';\r\n\t\t//-----------------------------------------\r\n\t}\r\n\tstd::cout << \"\\nClosing session.\\n\";\r\n\tsm->Close();\r\n\treturn 0;\r\n}\r\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/yuv_rgb.c",
    "content": "// Copyright 2016 Adrien Descamps\n// Distributed under BSD 3-Clause License\n\n#include \"yuv_rgb.h\"\n\n//#include <x86intrin.h>\n\n#include <stdio.h>\n\n\nuint8_t clamp(int16_t value)\n{\n\treturn value<0 ? 0 : (value>255 ? 255 : value);\n}\n\n// Definitions\n//\n// E'R, E'G, E'B, E'Y, E'Cb and E'Cr refer to the analog signals\n// E'R, E'G, E'B and E'Y range is [0:1], while E'Cb and E'Cr range is [-0.5:0.5]\n// R, G, B, Y, Cb and Cr refer to the digitalized values\n// The digitalized values can use their full range ([0:255] for 8bit values),\n// or a subrange (typically [16:235] for Y and [16:240] for CbCr).\n// We assume here that RGB range is always [0:255], since it is the case for \n// most digitalized images.\n// For 8bit values :\n// * Y = round((YMax-YMin)*E'Y + YMin)\n// * Cb = round((CbRange)*E'Cb + 128)\n// * Cr = round((CrRange)*E'Cr + 128)\n// Where *Min and *Max are the range of each channel\n//\n// In the analog domain , the RGB to YCbCr transformation is defined as:\n// * E'Y = Rf*E'R + Gf*E'G + Bf*E'B\n// Where Rf, Gf and Bf are constants defined in each standard, with \n// Rf + Gf + Bf = 1 (necessary to ensure that E'Y range is [0:1])\n// * E'Cb = (E'B - E'Y) / CbNorm\n// * E'Cr = (E'R - E'Y) / CrNorm\n// Where CbNorm and CrNorm are constants, dependent of Rf, Gf, Bf, computed \n// to normalize to a [-0.5:0.5] range : CbNorm=2*(1-Bf) and CrNorm=2*(1-Rf)\n//\n// Algorithms\n//\n// Most operations will be made in a fixed point format for speed, using \n// N bits of precision. 
In next section the [x] convention is used for \n// a fixed point rounded value, that is (int being the c type conversion)\n// * [x] = int(x*(2^N)+0.5)\n// N can be different for each factor, we simply use the highest value\n// that will not overflow in 16 bits intermediate variables.\n//.\n// For RGB to YCbCr conversion, we start by generating a pseudo Y value \n// (noted Y') in fixed point format, using the full range for now.\n// * Y' = ([Rf]*R + [Gf]*G + [Bf]*B)>>N\n// We can then compute Cb and Cr by\n// * Cb = ((B - Y')*[CbRange/(255*CbNorm)])>>N + 128\n// * Cr = ((R - Y')*[CrRange/(255*CrNorm)])>>N + 128\n// And finally, we normalize Y to its digital range\n// * Y = (Y'*[(YMax-YMin)/255])>>N + YMin\n// \n// For YCbCr to RGB conversion, we first compute the full range Y' value :\n// * Y' = ((Y-YMin)*[255/(YMax-YMin)])>>N\n// We can then compute B and R values by :\n// * B = ((Cb-128)*[(255*CbNorm)/CbRange])>>N + Y'\n// * R = ((Cr-128)*[(255*CrNorm)/CrRange])>>N + Y'\n// And finally, for G we know that:\n// * G = (Y' - (Rf*R + Bf*B)) / Gf\n// From above:\n// * G = (Y' - Rf * ((Cr-128)*(255*CrNorm)/CrRange + Y') - Bf * ((Cb-128)*(255*CbNorm)/CbRange + Y')) / Gf\n// Since 1-Rf-Bf=Gf, we can take Y' out of the division by Gf, and we get:\n// * G = Y' - (Cr-128)*Rf/Gf*(255*CrNorm)/CrRange - (Cb-128)*Bf/Gf*(255*CbNorm)/CbRange\n// That we can compute, with fixed point arithmetic, by\n// * G = Y' - ((Cr-128)*[Rf/Gf*(255*CrNorm)/CrRange] + (Cb-128)*[Bf/Gf*(255*CbNorm)/CbRange])>>N\n// \n// Note : in ITU-T T.871(JPEG), Y=Y', so that part could be optimized out\n\n\n#define FIXED_POINT_VALUE(value, precision) ((int)(((value)*(1<<precision))+0.5))\n\n// see above for description\ntypedef struct\n{\n\tuint8_t r_factor;    // [Rf]\n\tuint8_t g_factor;    // [Rg]\n\tuint8_t b_factor;    // [Rb]\n\tuint8_t cb_factor;   // [CbRange/(255*CbNorm)]\n\tuint8_t cr_factor;   // [CrRange/(255*CrNorm)]\n\tuint8_t y_factor;    // [(YMax-YMin)/255]\n\tuint8_t y_offset;    // YMin\n} 
RGB2YUVParam;\n\ntypedef struct\n{\n\tuint8_t cb_factor;   // [(255*CbNorm)/CbRange]\n\tuint8_t cr_factor;   // [(255*CrNorm)/CrRange]\n\tuint8_t g_cb_factor; // [Bf/Gf*(255*CbNorm)/CbRange]\n\tuint8_t g_cr_factor; // [Rf/Gf*(255*CrNorm)/CrRange]\n\tuint8_t y_factor;    // [(YMax-YMin)/255]\n\tuint8_t y_offset;    // YMin\n} YUV2RGBParam;\n\n#define RGB2YUV_PARAM(Rf, Bf, YMin, YMax, CbCrRange) \\\n{.r_factor=FIXED_POINT_VALUE(Rf, 8), \\\n.g_factor=256-FIXED_POINT_VALUE(Rf, 8)-FIXED_POINT_VALUE(Bf, 8), \\\n.b_factor=FIXED_POINT_VALUE(Bf, 8), \\\n.cb_factor=FIXED_POINT_VALUE((CbCrRange/255.0)/(2.0*(1-Bf)), 8), \\\n.cr_factor=FIXED_POINT_VALUE((CbCrRange/255.0)/(2.0*(1-Rf)), 8), \\\n.y_factor=FIXED_POINT_VALUE((YMax-YMin)/255.0, 7), \\\n.y_offset=YMin}\n\n#define YUV2RGB_PARAM(Rf, Bf, YMin, YMax, CbCrRange) \\\n{.cb_factor=FIXED_POINT_VALUE(255.0*(2.0*(1-Bf))/CbCrRange, 6), \\\n.cr_factor=FIXED_POINT_VALUE(255.0*(2.0*(1-Rf))/CbCrRange, 6), \\\n.g_cb_factor=FIXED_POINT_VALUE(Bf/(1.0-Bf-Rf)*255.0*(2.0*(1-Bf))/CbCrRange, 7), \\\n.g_cr_factor=FIXED_POINT_VALUE(Rf/(1.0-Bf-Rf)*255.0*(2.0*(1-Rf))/CbCrRange, 7), \\\n.y_factor=FIXED_POINT_VALUE(255.0/(YMax-YMin), 7), \\\n.y_offset=YMin}\n\nstatic const RGB2YUVParam RGB2YUV[3] = {\n\t// ITU-T T.871 (JPEG)\n\tRGB2YUV_PARAM(0.299, 0.114, 0.0, 255.0, 255.0),\n\t// ITU-R BT.601-7\n\tRGB2YUV_PARAM(0.299, 0.114, 16.0, 235.0, 224.0),\n\t// ITU-R BT.709-6\n\tRGB2YUV_PARAM(0.2126, 0.0722, 16.0, 235.0, 224.0)\n};\n\nstatic const YUV2RGBParam YUV2RGB[3] = {\n\t// ITU-T T.871 (JPEG)\n\tYUV2RGB_PARAM(0.299, 0.114, 0.0, 255.0, 255.0),\n\t// ITU-R BT.601-7\n\tYUV2RGB_PARAM(0.299, 0.114, 16.0, 235.0, 224.0),\n\t// ITU-R BT.709-6\n\tYUV2RGB_PARAM(0.2126, 0.0722, 16.0, 235.0, 224.0)\n};\n\n\nvoid rgb24_yuv420_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *RGB, uint32_t RGB_stride, \n\tuint8_t *Y, uint8_t *U, uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tYCbCrType yuv_type)\n{\n\tconst RGB2YUVParam *const param = 
&(RGB2YUV[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tuint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tfor(x=0; x<(width-1); x+=2)\n\t\t{\n\t\t\t// compute yuv for the four pixels, u and v values are summed\n\t\t\tuint8_t y_tmp;\n\t\t\tint16_t u_tmp, v_tmp;\n\t\t\t\n\t\t\ty_tmp = (param->r_factor*rgb_ptr1[0] + param->g_factor*rgb_ptr1[1] + param->b_factor*rgb_ptr1[2])>>8;\n\t\t\tu_tmp = rgb_ptr1[2]-y_tmp;\n\t\t\tv_tmp = rgb_ptr1[0]-y_tmp;\n\t\t\ty_ptr1[0]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\t\t\t\n\t\t\ty_tmp = (param->r_factor*rgb_ptr1[3] + param->g_factor*rgb_ptr1[4] + param->b_factor*rgb_ptr1[5])>>8;\n\t\t\tu_tmp += rgb_ptr1[5]-y_tmp;\n\t\t\tv_tmp += rgb_ptr1[3]-y_tmp;\n\t\t\ty_ptr1[1]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\n\t\t\ty_tmp = (param->r_factor*rgb_ptr2[0] + param->g_factor*rgb_ptr2[1] + param->b_factor*rgb_ptr2[2])>>8;\n\t\t\tu_tmp += rgb_ptr2[2]-y_tmp;\n\t\t\tv_tmp += rgb_ptr2[0]-y_tmp;\n\t\t\ty_ptr2[0]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\t\t\t\n\t\t\ty_tmp = (param->r_factor*rgb_ptr2[3] + param->g_factor*rgb_ptr2[4] + param->b_factor*rgb_ptr2[5])>>8;\n\t\t\tu_tmp += rgb_ptr2[5]-y_tmp;\n\t\t\tv_tmp += rgb_ptr2[3]-y_tmp;\n\t\t\ty_ptr2[1]=((y_tmp*param->y_factor)>>7) + param->y_offset;\n\n\t\t\tu_ptr[0] = (((u_tmp>>2)*param->cb_factor)>>8) + 128;\n\t\t\tv_ptr[0] = (((v_tmp>>2)*param->cr_factor)>>8) + 128;\n\t\t\t\n\t\t\trgb_ptr1 += 6;\n\t\t\trgb_ptr2 += 6;\n\t\t\ty_ptr1 += 2;\n\t\t\ty_ptr2 += 2;\n\t\t\tu_ptr += 1;\n\t\t\tv_ptr += 1;\n\t\t}\n\t}\n}\n\n\nvoid yuv420_rgb24_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *Y, const uint8_t *U, const uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tuint8_t *RGB, uint32_t RGB_stride, \n\tYCbCrType yuv_type)\n{\n\tconst YUV2RGBParam *const 
param = &(YUV2RGB[yuv_type]);\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tuint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tfor(x=0; x<(width-1); x+=2)\n\t\t{\n\t\t\tint8_t u_tmp, v_tmp;\n\t\t\tu_tmp = u_ptr[0]-128;\n\t\t\tv_tmp = v_ptr[0]-128;\n\t\t\t\n\t\t\t//compute Cb Cr color offsets, common to four pixels\n\t\t\tint16_t b_cb_offset, r_cr_offset, g_cbcr_offset;\n\t\t\tb_cb_offset = (param->cb_factor*u_tmp)>>6;\n\t\t\tr_cr_offset = (param->cr_factor*v_tmp)>>6;\n\t\t\tg_cbcr_offset = (param->g_cb_factor*u_tmp + param->g_cr_factor*v_tmp)>>7;\n\t\t\t\n\t\t\tint16_t y_tmp;\n\t\t\ty_tmp = (param->y_factor*(y_ptr1[0]-param->y_offset))>>7;\n\t\t\trgb_ptr1[0] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr1[1] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr1[2] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\ty_tmp = (param->y_factor*(y_ptr1[1]-param->y_offset))>>7;\n\t\t\trgb_ptr1[3] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr1[4] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr1[5] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\ty_tmp = (param->y_factor*(y_ptr2[0]-param->y_offset))>>7;\n\t\t\trgb_ptr2[0] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr2[1] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr2[2] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\ty_tmp = (param->y_factor*(y_ptr2[1]-param->y_offset))>>7;\n\t\t\trgb_ptr2[3] = clamp(y_tmp + r_cr_offset);\n\t\t\trgb_ptr2[4] = clamp(y_tmp - g_cbcr_offset);\n\t\t\trgb_ptr2[5] = clamp(y_tmp + b_cb_offset);\n\t\t\t\n\t\t\trgb_ptr1 += 6;\n\t\t\trgb_ptr2 += 6;\n\t\t\ty_ptr1 += 2;\n\t\t\ty_ptr2 += 2;\n\t\t\tu_ptr += 1;\n\t\t\tv_ptr += 1;\n\t\t}\n\t}\n}\n\n#ifdef __SSE2__\n\n//see rgb.txt\n#define UNPACK_RGB24_32_STEP(RS1, RS2, RS3, RS4, RS5, RS6, RD1, RD2, RD3, RD4, RD5, RD6) \\\nRD1 = _mm_unpacklo_epi8(RS1, RS4); \\\nRD2 = 
_mm_unpackhi_epi8(RS1, RS4); \\\nRD3 = _mm_unpacklo_epi8(RS2, RS5); \\\nRD4 = _mm_unpackhi_epi8(RS2, RS5); \\\nRD5 = _mm_unpacklo_epi8(RS3, RS6); \\\nRD6 = _mm_unpackhi_epi8(RS3, RS6);\n\n#define RGB2YUV_16(R, G, B, Y, U, V) \\\nY = _mm_add_epi16(_mm_mullo_epi16(R, _mm_set1_epi16(param->r_factor)), \\\n                  _mm_mullo_epi16(G, _mm_set1_epi16(param->g_factor))); \\\nY = _mm_add_epi16(Y, _mm_mullo_epi16(B, _mm_set1_epi16(param->b_factor))); \\\nY = _mm_srli_epi16(Y, 8); \\\nU = _mm_mullo_epi16(_mm_sub_epi16(B, Y), _mm_set1_epi16(param->cb_factor)); \\\nU = _mm_add_epi16(_mm_srai_epi16(U, 8), _mm_set1_epi16(128)); \\\nV = _mm_mullo_epi16(_mm_sub_epi16(R, Y), _mm_set1_epi16(param->cr_factor)); \\\nV = _mm_add_epi16(_mm_srai_epi16(V, 8), _mm_set1_epi16(128)); \\\nY = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(Y, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset));\n\n#define RGB2YUV_32 \\\n\t__m128i r_16, g_16, b_16; \\\n\t__m128i y1_16, y2_16, cb1_16, cb2_16, cr1_16, cr2_16, Y, cb, cr; \\\n\t__m128i tmp1, tmp2, tmp3, tmp4, tmp5, tmp6; \\\n\t__m128i rgb1 = LOAD_SI128((const __m128i*)(rgb_ptr1)), \\\n\t\trgb2 = LOAD_SI128((const __m128i*)(rgb_ptr1+16)), \\\n\t\trgb3 = LOAD_SI128((const __m128i*)(rgb_ptr1+32)), \\\n\t\trgb4 = LOAD_SI128((const __m128i*)(rgb_ptr2)), \\\n\t\trgb5 = LOAD_SI128((const __m128i*)(rgb_ptr2+16)), \\\n\t\trgb6 = LOAD_SI128((const __m128i*)(rgb_ptr2+32)); \\\n\t/* unpack rgb24 data to r, g and b data in separate channels*/ \\\n\t/* see rgb.txt to get an idea of the algorithm, note that we only go to the next to last step*/ \\\n\t/* here, because averaging in horizontal direction is easier like this*/ \\\n\t/* The last step is applied further on the Y channel only*/ \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, 
rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\t/* first compute Y', (B-Y') and (R-Y'), in 16bits values, for the first line */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are saved*/ \\\n\tr_16 = _mm_unpacklo_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb1_16 = _mm_sub_epi16(b_16, y1_16); \\\n\tcr1_16 = _mm_sub_epi16(r_16, y1_16); \\\n\tr_16 = _mm_unpacklo_epi8(rgb4, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb1_16 = _mm_add_epi16(cb1_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr1_16 = _mm_add_epi16(cr1_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr1), Y); \\\n\t/* same for the second line, compute Y', 
(B-Y') and (R-Y'), in 16bits values */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are added to the previous values*/ \\\n\tr_16 = _mm_unpackhi_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb1_16 = _mm_add_epi16(cb1_16, _mm_sub_epi16(b_16, y1_16)); \\\n\tcr1_16 = _mm_add_epi16(cr1_16, _mm_sub_epi16(r_16, y1_16)); \\\n\tr_16 = _mm_unpackhi_epi8(rgb4, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb1_16 = _mm_add_epi16(cb1_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr1_16 = _mm_add_epi16(cr1_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr2), Y); \\\n\t/* Rescale Cb and Cr to their final range */ \\\n\tcb1_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cb1_16, 2), 
_mm_set1_epi16(param->cb_factor)), 8), _mm_set1_epi16(128)); \\\n\tcr1_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cr1_16, 2), _mm_set1_epi16(param->cr_factor)), 8), _mm_set1_epi16(128)); \\\n\t\\\n\t/* do the same again with next data */ \\\n\trgb1 = LOAD_SI128((const __m128i*)(rgb_ptr1+48)), \\\n\trgb2 = LOAD_SI128((const __m128i*)(rgb_ptr1+64)), \\\n\trgb3 = LOAD_SI128((const __m128i*)(rgb_ptr1+80)), \\\n\trgb4 = LOAD_SI128((const __m128i*)(rgb_ptr2+48)), \\\n\trgb5 = LOAD_SI128((const __m128i*)(rgb_ptr2+64)), \\\n\trgb6 = LOAD_SI128((const __m128i*)(rgb_ptr2+80)); \\\n\t/* unpack rgb24 data to r, g and b data in separate channels*/ \\\n\t/* see rgb.txt to get an idea of the algorithm, note that we only go to the next to last step*/ \\\n\t/* here, because averaging in horizontal direction is easier like this*/ \\\n\t/* The last step is applied further on the Y channel only*/ \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\tUNPACK_RGB24_32_STEP(rgb1, rgb2, rgb3, rgb4, rgb5, rgb6, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6) \\\n\tUNPACK_RGB24_32_STEP(tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, rgb1, rgb2, rgb3, rgb4, rgb5, rgb6) \\\n\t/* first compute Y', (B-Y') and (R-Y'), in 16bits values, for the first line */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are saved*/ \\\n\tr_16 = _mm_unpacklo_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb2_16 = 
_mm_sub_epi16(b_16, y1_16); \\\n\tcr2_16 = _mm_sub_epi16(r_16, y1_16); \\\n\tr_16 = _mm_unpacklo_epi8(rgb4, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpacklo_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpacklo_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb2_16 = _mm_add_epi16(cb2_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr2_16 = _mm_add_epi16(cr2_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr1+16), Y); \\\n\t/* same for the second line, compute Y', (B-Y') and (R-Y'), in 16bits values */ \\\n\t/* Y is saved for each pixel, while only sums of (B-Y') and (R-Y') for pairs of adjacents pixels are added to the previous values*/ \\\n\tr_16 = _mm_unpackhi_epi8(rgb1, _mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb2, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb3, _mm_setzero_si128()); \\\n\ty1_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty1_16 = _mm_add_epi16(y1_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty1_16 = _mm_srli_epi16(y1_16, 8); \\\n\tcb2_16 = _mm_add_epi16(cb2_16, _mm_sub_epi16(b_16, y1_16)); \\\n\tcr2_16 = _mm_add_epi16(cr2_16, _mm_sub_epi16(r_16, y1_16)); \\\n\tr_16 = _mm_unpackhi_epi8(rgb4, 
_mm_setzero_si128()); \\\n\tg_16 = _mm_unpackhi_epi8(rgb5, _mm_setzero_si128()); \\\n\tb_16 = _mm_unpackhi_epi8(rgb6, _mm_setzero_si128()); \\\n\ty2_16 = _mm_add_epi16(_mm_mullo_epi16(r_16, _mm_set1_epi16(param->r_factor)), \\\n\t\t_mm_mullo_epi16(g_16, _mm_set1_epi16(param->g_factor))); \\\n\ty2_16 = _mm_add_epi16(y2_16, _mm_mullo_epi16(b_16, _mm_set1_epi16(param->b_factor))); \\\n\ty2_16 = _mm_srli_epi16(y2_16, 8); \\\n\tcb2_16 = _mm_add_epi16(cb2_16, _mm_sub_epi16(b_16, y2_16)); \\\n\tcr2_16 = _mm_add_epi16(cr2_16, _mm_sub_epi16(r_16, y2_16)); \\\n\t/* Rescale Y' to Y, pack it to 8bit values and save it */ \\\n\ty1_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y1_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\ty2_16 = _mm_add_epi16(_mm_srli_epi16(_mm_mullo_epi16(y2_16, _mm_set1_epi16(param->y_factor)), 7), _mm_set1_epi16(param->y_offset)); \\\n\tY = _mm_packus_epi16(y1_16, y2_16); \\\n\tY = _mm_unpackhi_epi8(_mm_slli_si128(Y, 8), Y); \\\n\tSAVE_SI128((__m128i*)(y_ptr2+16), Y); \\\n\t/* Rescale Cb and Cr to their final range */ \\\n\tcb2_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cb2_16, 2), _mm_set1_epi16(param->cb_factor)), 8), _mm_set1_epi16(128)); \\\n\tcr2_16 = _mm_add_epi16(_mm_srai_epi16(_mm_mullo_epi16(_mm_srai_epi16(cr2_16, 2), _mm_set1_epi16(param->cr_factor)), 8), _mm_set1_epi16(128)); \\\n\t/* Pack and save Cb Cr */ \\\n\tcb = _mm_packus_epi16(cb1_16, cb2_16); \\\n\tcr = _mm_packus_epi16(cr1_16, cr2_16); \\\n\tSAVE_SI128((__m128i*)(u_ptr), cb); \\\n\tSAVE_SI128((__m128i*)(v_ptr), cr);\n\n\nvoid rgb24_yuv420_sse(uint32_t width, uint32_t height, \n\tconst uint8_t *RGB, uint32_t RGB_stride, \n\tuint8_t *Y, uint8_t *U, uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_load_si128\n\t#define SAVE_SI128 _mm_stream_si128\n\tconst RGB2YUVParam *const param = &(RGB2YUV[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst 
uint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tuint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tRGB2YUV_32\n\t\t\t\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n\nvoid rgb24_yuv420_sseu(uint32_t width, uint32_t height, \n\tconst uint8_t *RGB, uint32_t RGB_stride, \n\tuint8_t *Y, uint8_t *U, uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_loadu_si128\n\t#define SAVE_SI128 _mm_storeu_si128\n\tconst RGB2YUVParam *const param = &(RGB2YUV[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tuint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tRGB2YUV_32\n\t\t\t\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n#endif\n\n#ifdef __SSE2__\n\n#define UV2RGB_16(U,V,R1,G1,B1,R2,G2,B2) \\\n\tr_tmp = _mm_srai_epi16(_mm_mullo_epi16(V, _mm_set1_epi16(param->cr_factor)), 6); \\\n\tg_tmp = _mm_srai_epi16(_mm_add_epi16( \\\n\t\t_mm_mullo_epi16(U, _mm_set1_epi16(param->g_cb_factor)), \\\n\t\t_mm_mullo_epi16(V, _mm_set1_epi16(param->g_cr_factor))), 7); \\\n\tb_tmp = _mm_srai_epi16(_mm_mullo_epi16(U, _mm_set1_epi16(param->cb_factor)), 6); \\\n\tR1 = _mm_unpacklo_epi16(r_tmp, r_tmp); \\\n\tG1 = _mm_unpacklo_epi16(g_tmp, g_tmp); \\\n\tB1 = _mm_unpacklo_epi16(b_tmp, b_tmp); \\\n\tR2 = _mm_unpackhi_epi16(r_tmp, r_tmp); \\\n\tG2 = _mm_unpackhi_epi16(g_tmp, g_tmp); \\\n\tB2 = 
_mm_unpackhi_epi16(b_tmp, b_tmp); \\\n\n#define ADD_Y2RGB_16(Y1,Y2,R1,G1,B1,R2,G2,B2) \\\n\tY1 = _mm_srai_epi16(_mm_mullo_epi16(Y1, _mm_set1_epi16(param->y_factor)), 7); \\\n\tY2 = _mm_srai_epi16(_mm_mullo_epi16(Y2, _mm_set1_epi16(param->y_factor)), 7); \\\n\t\\\n\tR1 = _mm_add_epi16(Y1, R1); \\\n\tG1 = _mm_sub_epi16(Y1, G1); \\\n\tB1 = _mm_add_epi16(Y1, B1); \\\n\tR2 = _mm_add_epi16(Y2, R2); \\\n\tG2 = _mm_sub_epi16(Y2, G2); \\\n\tB2 = _mm_add_epi16(Y2, B2); \\\n\n#define PACK_RGB24_32_STEP(RS1, RS2, RS3, RS4, RS5, RS6, RD1, RD2, RD3, RD4, RD5, RD6) \\\nRD1 = _mm_packus_epi16(_mm_and_si128(RS1,_mm_set1_epi16(0xFF)), _mm_and_si128(RS2,_mm_set1_epi16(0xFF))); \\\nRD2 = _mm_packus_epi16(_mm_and_si128(RS3,_mm_set1_epi16(0xFF)), _mm_and_si128(RS4,_mm_set1_epi16(0xFF))); \\\nRD3 = _mm_packus_epi16(_mm_and_si128(RS5,_mm_set1_epi16(0xFF)), _mm_and_si128(RS6,_mm_set1_epi16(0xFF))); \\\nRD4 = _mm_packus_epi16(_mm_srli_epi16(RS1,8), _mm_srli_epi16(RS2,8)); \\\nRD5 = _mm_packus_epi16(_mm_srli_epi16(RS3,8), _mm_srli_epi16(RS4,8)); \\\nRD6 = _mm_packus_epi16(_mm_srli_epi16(RS5,8), _mm_srli_epi16(RS6,8)); \\\n\n#define PACK_RGB24_32(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\nPACK_RGB24_32_STEP(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\nPACK_RGB24_32_STEP(RGB1, RGB2, RGB3, RGB4, RGB5, RGB6, R1, R2, G1, G2, B1, B2) \\\nPACK_RGB24_32_STEP(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\nPACK_RGB24_32_STEP(RGB1, RGB2, RGB3, RGB4, RGB5, RGB6, R1, R2, G1, G2, B1, B2) \\\nPACK_RGB24_32_STEP(R1, R2, G1, G2, B1, B2, RGB1, RGB2, RGB3, RGB4, RGB5, RGB6) \\\n\n\n#define YUV2RGB_32 \\\n\t__m128i r_tmp, g_tmp, b_tmp; \\\n\t__m128i r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2; \\\n\t__m128i r_uv_16_1, g_uv_16_1, b_uv_16_1, r_uv_16_2, g_uv_16_2, b_uv_16_2; \\\n\t__m128i y_16_1, y_16_2; \\\n\t\\\n\t__m128i u = LOAD_SI128((const __m128i*)(u_ptr)); \\\n\t__m128i v = LOAD_SI128((const __m128i*)(v_ptr)); \\\n\tu = _mm_add_epi8(u, 
_mm_set1_epi8(-128)); \\\n\tv = _mm_add_epi8(v, _mm_set1_epi8(-128)); \\\n\t\\\n\t/* process first 16 pixels of first line */\\\n\t__m128i u_16 = _mm_srai_epi16(_mm_unpacklo_epi8(u, u), 8); \\\n\t__m128i v_16 = _mm_srai_epi16(_mm_unpacklo_epi8(v, v), 8); \\\n\t\\\n\tUV2RGB_16(u_16, v_16, r_uv_16_1, g_uv_16_1, b_uv_16_1, r_uv_16_2, g_uv_16_2, b_uv_16_2) \\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; \\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\t__m128i y = LOAD_SI128((const __m128i*)(y_ptr1)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_11 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_11 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_11 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t/* process first 16 pixels of second line */\\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; \\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\ty = LOAD_SI128((const __m128i*)(y_ptr2)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_21 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_21 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_21 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t/* process last 16 pixels of first line */\\\n\tu_16 = _mm_srai_epi16(_mm_unpackhi_epi8(u, u), 8); \\\n\tv_16 = _mm_srai_epi16(_mm_unpackhi_epi8(v, v), 8); \\\n\t\\\n\tUV2RGB_16(u_16, v_16, r_uv_16_1, g_uv_16_1, b_uv_16_1, r_uv_16_2, g_uv_16_2, b_uv_16_2) \\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; 
\\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\ty = LOAD_SI128((const __m128i*)(y_ptr1+16)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_12 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_12 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_12 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t/* process last 16 pixels of second line */\\\n\tr_16_1=r_uv_16_1; g_16_1=g_uv_16_1; b_16_1=b_uv_16_1; \\\n\tr_16_2=r_uv_16_2; g_16_2=g_uv_16_2; b_16_2=b_uv_16_2; \\\n\t\\\n\ty = LOAD_SI128((const __m128i*)(y_ptr2+16)); \\\n\ty = _mm_sub_epi8(y, _mm_set1_epi8(param->y_offset)); \\\n\ty_16_1 = _mm_unpacklo_epi8(y, _mm_setzero_si128()); \\\n\ty_16_2 = _mm_unpackhi_epi8(y, _mm_setzero_si128()); \\\n\t\\\n\tADD_Y2RGB_16(y_16_1, y_16_2, r_16_1, g_16_1, b_16_1, r_16_2, g_16_2, b_16_2) \\\n\t\\\n\t__m128i r_8_22 = _mm_packus_epi16(r_16_1, r_16_2); \\\n\t__m128i g_8_22 = _mm_packus_epi16(g_16_1, g_16_2); \\\n\t__m128i b_8_22 = _mm_packus_epi16(b_16_1, b_16_2); \\\n\t\\\n\t__m128i rgb_1, rgb_2, rgb_3, rgb_4, rgb_5, rgb_6; \\\n\t\\\n\tPACK_RGB24_32(r_8_11, r_8_12, g_8_11, g_8_12, b_8_11, b_8_12, rgb_1, rgb_2, rgb_3, rgb_4, rgb_5, rgb_6) \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1), rgb_1); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+16), rgb_2); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+32), rgb_3); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+48), rgb_4); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+64), rgb_5); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr1+80), rgb_6); \\\n\t\\\n\tPACK_RGB24_32(r_8_21, r_8_22, g_8_21, g_8_22, b_8_21, b_8_22, rgb_1, rgb_2, rgb_3, rgb_4, rgb_5, rgb_6) \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2), rgb_1); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+16), rgb_2); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+32), rgb_3); 
\\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+48), rgb_4); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+64), rgb_5); \\\n\tSAVE_SI128((__m128i*)(rgb_ptr2+80), rgb_6); \\\n\n\nvoid yuv420_rgb24_sse(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *Y, const uint8_t *U, const uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tuint8_t *RGB, uint32_t RGB_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_load_si128\n\t#define SAVE_SI128 _mm_stream_si128\n\tconst YUV2RGBParam *const param = &(YUV2RGB[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tuint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tYUV2RGB_32\n\t\t\t\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n\nvoid yuv420_rgb24_sseu(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *Y, const uint8_t *U, const uint8_t *V, uint32_t Y_stride, uint32_t UV_stride, \n\tuint8_t *RGB, uint32_t RGB_stride, \n\tYCbCrType yuv_type)\n{\n\t#define LOAD_SI128 _mm_loadu_si128\n\t#define SAVE_SI128 _mm_storeu_si128\n\tconst YUV2RGBParam *const param = &(YUV2RGB[yuv_type]);\n\t\n\tuint32_t x, y;\n\tfor(y=0; y<(height-1); y+=2)\n\t{\n\t\tconst uint8_t *y_ptr1=Y+y*Y_stride,\n\t\t\t*y_ptr2=Y+(y+1)*Y_stride,\n\t\t\t*u_ptr=U+(y/2)*UV_stride,\n\t\t\t*v_ptr=V+(y/2)*UV_stride;\n\t\t\n\t\tuint8_t *rgb_ptr1=RGB+y*RGB_stride,\n\t\t\t*rgb_ptr2=RGB+(y+1)*RGB_stride;\n\t\t\n\t\tfor(x=0; x<(width-31); x+=32)\n\t\t{\n\t\t\tYUV2RGB_32\n\t\t\t\n\t\t\ty_ptr1+=32;\n\t\t\ty_ptr2+=32;\n\t\t\tu_ptr+=16; \n\t\t\tv_ptr+=16;\n\t\t\trgb_ptr1+=96;\n\t\t\trgb_ptr2+=96;\n\t\t}\n\t}\n\t#undef LOAD_SI128\n\t#undef SAVE_SI128\n}\n\n\n#endif //__SSE2__\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject/yuv_rgb.h",
    "content": "// Copyright 2016 Adrien Descamps\n// Distributed under BSD 3-Clause License\n\n// Provide optimized functions to convert images from 8-bit yuv420 to rgb24 format\n\n// There are a few slightly different variations of the YCbCr color space with different parameters that \n// change the conversion matrix.\n// The three most common YCbCr color spaces, defined by the BT.601, BT.709 and JPEG standards, are implemented here.\n// See the respective standards for details.\n// The matrix values used are derived from http://www.equasys.de/colorconversion.html\n\n// YUV420 is stored as three separate channels, with U and V (Cb and Cr) subsampled by a factor of 2\n// For conversion from yuv to rgb, no interpolation is done, and the same UV values are used for 4 rgb pixels. This \n// is suboptimal for image quality, but by far the fastest method.\n\n// For all methods, width and height should be even; if not, the last row/column of the result image won't be affected.\n// For sse methods, if the width is not divisible by 32, the last (width%32) pixels of each line won't be affected.\n\n#include <stdint.h>\n\ntypedef enum\n{\n\tYCBCR_JPEG,\n\tYCBCR_601,\n\tYCBCR_709\n} YCbCrType;\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n// yuv to rgb, standard c implementation\nvoid yuv420_rgb24_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *y, const uint8_t *u, const uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tuint8_t *rgb, uint32_t rgb_stride, \n\tYCbCrType yuv_type);\n\n// yuv to rgb, sse implementation\n// pointers must be 16 byte aligned, and strides must be divisible by 16\nvoid yuv420_rgb24_sse(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *y, const uint8_t *u, const uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tuint8_t *rgb, uint32_t rgb_stride, \n\tYCbCrType yuv_type);\n\n// yuv to rgb, sse implementation\n// pointers do not need to be 16 byte aligned\nvoid yuv420_rgb24_sseu(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *y, 
const uint8_t *u, const uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tuint8_t *rgb, uint32_t rgb_stride, \n\tYCbCrType yuv_type);\n\n\n\n// rgb to yuv, standard c implementation\nvoid rgb24_yuv420_std(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *rgb, uint32_t rgb_stride, \n\tuint8_t *y, uint8_t *u, uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tYCbCrType yuv_type);\n\n// rgb to yuv, sse implementation\n// pointers must be 16 byte aligned, and strides must be divisible by 16\nvoid rgb24_yuv420_sse(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *rgb, uint32_t rgb_stride, \n\tuint8_t *y, uint8_t *u, uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tYCbCrType yuv_type);\n\n// rgb to yuv, sse implementation\n// pointers do not need to be 16 byte aligned\nvoid rgb24_yuv420_sseu(\n\tuint32_t width, uint32_t height, \n\tconst uint8_t *rgb, uint32_t rgb_stride, \n\tuint8_t *y, uint8_t *u, uint8_t *v, uint32_t y_stride, uint32_t uv_stride, \n\tYCbCrType yuv_type);\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "PinBoxTestProject/PinBoxTestProject.sln",
    "content": "﻿\r\nMicrosoft Visual Studio Solution File, Format Version 12.00\r\n# Visual Studio 14\r\nVisualStudioVersion = 14.0.25420.1\r\nMinimumVisualStudioVersion = 10.0.40219.1\r\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"PinBoxTestProject\", \"PinBoxTestProject\\PinBoxTestProject.vcxproj\", \"{901BAB37-7D47-4095-9D2A-F203382FDA88}\"\r\nEndProject\r\nGlobal\r\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\r\n\t\tDebug|x64 = Debug|x64\r\n\t\tDebug|x86 = Debug|x86\r\n\t\tRelease|x64 = Release|x64\r\n\t\tRelease|x86 = Release|x86\r\n\tEndGlobalSection\r\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Debug|x64.ActiveCfg = Debug|x64\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Debug|x64.Build.0 = Debug|x64\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Debug|x86.ActiveCfg = Debug|Win32\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Debug|x86.Build.0 = Debug|Win32\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Release|x64.ActiveCfg = Release|x64\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Release|x64.Build.0 = Release|x64\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Release|x86.ActiveCfg = Release|Win32\r\n\t\t{901BAB37-7D47-4095-9D2A-F203382FDA88}.Release|x86.Build.0 = Release|Win32\r\n\tEndGlobalSection\r\n\tGlobalSection(SolutionProperties) = preSolution\r\n\t\tHideSolutionNode = FALSE\r\n\tEndGlobalSection\r\nEndGlobal\r\n"
  },
  {
    "path": "README.md",
    "content": "\r\n[![N|Solid](https://cdn.discordapp.com/attachments/340110838947905538/398531319048699905/test.png)](https://github.com/namkazt/PinBox)\r\n\r\nWelcome to Pinbox! Pinbox is a homebrew application (soon a .cia) for the Nintendo 3DS that streams content from your Windows PC to the 3DS. Keep in mind that Pinbox is currently in alpha, so bugs will occur! Contact Namkazt on the Pinbox Discord for help.\r\n\r\nhttps://discord.gg/CpNpMdG\r\n\r\n# Current Support\r\n- Streaming from a Windows PC to a 3DS (or over the internet from a Windows VPS)\r\n- Audio support (MP2 encode/decode)\r\n- Hardware-accelerated Y2R color conversion\r\n- Xbox 360 controller emulation for broad game support (via ViGEm)\r\n- Keyboard mapping with profile selection from the 3DS side\r\n- Real-time configuration from the 3DS side\r\n\r\n#### Plans\r\n- Implement a Qt UI for basic use\r\n- Add a hub UI for fast access to games and apps\r\n- Check for Wi-Fi, sleep mode, and other events relating to the 3DS\r\n# Requirements to get Pinbox to run:\r\n- Visual C++ Redistributable for Visual Studio 2015\r\nhttps://goo.gl/ijdZ1x\r\n- Xbox 360 Accessories Software 1.2 (contains the missing device drivers)\r\nhttps://goo.gl/xPK8qE\r\n\r\n- Make sure Windows is up to date with the latest security patches and updates\r\n- Install the Virtual Gamepad Emulation Framework\r\nhttps://goo.gl/qcuVbp\r\n- Keep in mind: the requirements for Pinbox may change; please check the #how-to-setup section of the Pinbox Discord server first.\r\n\r\n#### Notes\r\n- Configure the firewall to allow port 1234 in and out (or disable the firewall while using the software and re-enable it when done)\r\n- Make sure both devices are connected to Wi-Fi\r\n- You do not have to type in port 1234; doing so will crash the app\r\n- If you are getting a black screen in Pinbox, open server.cfg in the Pinbox server directory and change the monitor index to zero\r\n# Installation\r\n\r\nTutorial Video (Thanks to @GameInCanada): 
https://www.youtube.com/watch?v=Q-R2cy-vBgY\r\n# Troubleshooting\r\n\r\nPlease follow the instructions in the Pinbox Discord, since the troubleshooting steps change frequently and are updated constantly.\r\n\r\n## If you are having issues, don't hesitate to ask for help on our official Discord channel!\r\n## https://discord.gg/CpNpMdG\r\n"
  },
  {
    "path": "ThirdParty/FakeInput/LICENSE",
    "content": "MIT license\n-----------\n\nCopyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/action.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_ACTION_HPP\n#define FI_ACTION_HPP\n\nnamespace FakeInput\n{\n    /** Abstract class representing arbitrary action which can be sent. */\n    class Action\n    {\n    public:\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        virtual Action* clone() const = 0;\n\n        /** Performs the action. */\n        virtual void send() const = 0;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/actions.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"actionsequence.hpp\"\n#include \"commandaction.hpp\"\n#include \"keyaction.hpp\"\n#include \"mouseaction.hpp\"\n#include \"waitaction.hpp\"\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/actionsequence.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"actionsequence.hpp\"\n\n#include <iostream>\n\nnamespace FakeInput\n{\n    ActionSequence::ActionSequence()\n    {\n    }\n\n    ActionSequence::ActionSequence(const ActionSequence& anotherSequence)\n    {\n        ActionList::const_iterator it;\n        for(it = anotherSequence.actions_.begin(); it != anotherSequence.actions_.end(); ++it)\n        {\n            actions_.push_back((*it)->clone());\n        }\n    }\n\n    ActionSequence::~ActionSequence()\n    {\n        ActionList::iterator it;\n        for(it = actions_.begin(); it != actions_.end(); ++it)\n        {\n            delete (*it);\n        }\n    }\n\n    Action* ActionSequence::clone() const\n    {\n        return new 
ActionSequence(*this);\n    }\n\n    ActionSequence& ActionSequence::join(ActionSequence& anotherSequence)\n    {\n        actions_.push_back(anotherSequence.clone());\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::press(Key key)\n    {\n        actions_.push_back(new KeyboardPress(key));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::press(KeyType type)\n    {\n        actions_.push_back(new KeyboardPress(type));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::press(MouseButton button)\n    {\n        actions_.push_back(new MousePress(button));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::release(Key key)\n    {\n        actions_.push_back(new KeyboardRelease(key));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::release(KeyType type)\n    {\n        actions_.push_back(new KeyboardRelease(type));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::release(MouseButton button)\n    {\n        actions_.push_back(new MouseRelease(button));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::moveMouse(int dx, int dy)\n    {\n        actions_.push_back(new MouseRelativeMotion(dx, dy));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::moveMouseTo(int x, int y)\n    {\n        actions_.push_back(new MouseAbsoluteMotion(x, y));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::wheelUp()\n    {\n        actions_.push_back(new MouseWheelUp());\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::wheelDown()\n    {\n        actions_.push_back(new MouseWheelDown());\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::runCommand(const std::string& cmd)\n    {\n        actions_.push_back(new CommandRun(cmd));\n        return *this;\n    }\n\n    ActionSequence& ActionSequence::wait(unsigned int milisec)\n    {\n        actions_.push_back(new Wait(milisec));\n        
return *this;\n    }\n\n    void ActionSequence::send() const\n    {\n        ActionList::const_iterator it;\n        for(it = actions_.begin(); it != actions_.end(); ++it)\n        {\n            (*it)->send();\n        }\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/actionsequence.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_ACTIONSEQUENCE_HPP\n#define FI_ACTIONSEQUENCE_HPP\n\n#include \"action.hpp\"\n\n#include \"keyaction.hpp\"\n#include \"mouseaction.hpp\"\n#include \"commandaction.hpp\"\n#include \"waitaction.hpp\"\n\n#include <list>\n\nnamespace FakeInput\n{\n    /** Represents a sequence of actions. */\n    class ActionSequence : public Action\n    {\n        typedef std::list<Action*> ActionList;\n\n    public:\n        /** ActionSequence constructor. */\n        ActionSequence();\n\n        /** ActionSequence copy-constructor. */\n        ActionSequence(const ActionSequence& anotherSequence);\n\n        /** ActionSequence destructor. 
*/\n        virtual ~ActionSequence();\n\n        /** Creates new copy of this action sequence.\n         *\n         * @returns\n         *     New copy of this action sequence allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Adds another sequence to the end of this sequence.\n         *\n         * So all actions from the other sequence will be performed\n         * after all actions already in this sequence.\n         *\n         * @param anotherSequence\n         *     Another sequence to join\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& join(ActionSequence& anotherSequence);\n\n        /** Adds key press action to the end of this sequence.\n         *\n         * @param key\n         *     The key to press\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& press(Key key);\n\n        /** Adds key press action to the end of this sequence.\n         *\n         * @param keyType\n         *     The type of the key to press\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& press(KeyType keyType);\n\n        /** Adds mouse press action to the end of this sequence.\n         *\n         * @param button\n         *     The mouse button to press\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& press(MouseButton button);\n\n        /** Adds key release action to the end of this sequence.\n         *\n         * @param key\n         *     The key to release\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& release(Key key);\n\n        /** Adds key release action to the end of this sequence.\n         *\n         * @param keyType\n         *     The type of the key to release\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& release(KeyType keyType);\n\n        /** Adds mouse release action to the end of this sequence.\n         *\n         * @param button\n         *     The mouse button to release\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& release(MouseButton button);\n\n        /** Adds mouse relative motion action to the end of this sequence.\n         *\n         * @param dx\n         *     Amount of pixels to move on X axis\n         * @param dy\n         *     Amount of pixels to move on Y axis\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& moveMouse(int dx, int dy);\n\n        /** Adds mouse absolute motion action to the end of this sequence.\n         *\n         * @param x\n         *     Where to move on X axis\n         * @param y\n         *     Where to move on Y axis\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& moveMouseTo(int x, int y);\n\n        /** Adds mouse wheel up action to the end of this sequence.\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& wheelUp();\n\n        /** Adds mouse wheel down action to the end of this sequence.\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& wheelDown();\n\n        /** Adds command run action to the end of this sequence.\n         *\n         * @param cmd\n         *     Command to run\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& runCommand(const std::string& cmd);\n\n        /** Adds wait action to the end of this sequence.\n         *\n         * @param milisec\n         *     Time to wait in milliseconds\n         *\n         * @returns\n         *     The reference to this action\n         */\n        ActionSequence& wait(unsigned int milisec);\n        \n        /** Performs all actions in the sequence. */\n        void send() const;\n\n    private:\n        /** List of actions in this sequence. */\n        ActionList actions_;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/commandaction.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"commandaction.hpp\"\n\n#include \"system.hpp\"\n\nnamespace FakeInput\n{\n    CommandRun::CommandRun(const std::string& cmd):\n        cmd_(cmd)\n    {\n    }\n\n    Action* CommandRun::clone() const\n    {\n        return new CommandRun(cmd_);\n    }\n\n    void CommandRun::send() const\n    {\n        System::run(cmd_);\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/commandaction.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_COMMANDACTION_HPP\n#define FI_COMMANDACTION_HPP\n\n#include <string>\n\n#include \"action.hpp\"\n\nnamespace FakeInput\n{\n    /** Represents a command run action.\n     *\n     * Runs a specified command-line command.\n     */\n    class CommandRun : public Action\n    {\n    public:\n        /** CommandRun action constructor.\n         *\n         * @param cmd\n         *     Command to run.\n         */\n        CommandRun(const std::string& cmd);\n\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Performs this action. 
*/\n        void send() const;\n\n    private:\n        /** Command to run. */\n        std::string cmd_;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/keyaction.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"keyaction.hpp\"\n\n#include \"keyboard.hpp\"\n\nnamespace FakeInput\n{\n    KeyAction::KeyAction(KeyActionCallback callback, Key key):\n        callback_(callback),\n        key_(key)\n    {\n    }\n\n    Action* KeyAction::clone() const\n    {\n        return new KeyAction(callback_, key_);\n    }\n\n    void KeyAction::send() const\n    {\n        callback_(key_);\n    }\n\n    KeyboardPress::KeyboardPress(Key key):\n        KeyAction(Keyboard::pressKey, key)\n    {\n    }\n\n    KeyboardPress::KeyboardPress(KeyType keyType):\n        KeyAction(Keyboard::pressKey, Key(keyType))\n    {\n    }\n\n    KeyboardRelease::KeyboardRelease(Key key):\n        KeyAction(Keyboard::releaseKey, key)\n    {\n    
}\n\n    KeyboardRelease::KeyboardRelease(KeyType keyType):\n        KeyAction(Keyboard::releaseKey, Key(keyType))\n    {\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/keyaction.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_KEYACTION_HPP\n#define FI_KEYACTION_HPP\n\n#include \"action.hpp\"\n#include \"key.hpp\"\n\nnamespace FakeInput\n{\n    typedef void (*KeyActionCallback) (Key key);\n\n    /** Represent a key action. 
*/\n    class KeyAction : public Action\n    {\n    public:\n        /** KeyAction constructor.\n         *\n         * @param callback\n         *     Callback whitch can handle with key.\n         * @param key\n         *     Key to be handled.\n         */\n        KeyAction(KeyActionCallback callback, Key key);\n\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Performs this action. */\n        void send() const;\n\n    private:\n        KeyActionCallback callback_;\n        Key key_;\n    };\n\n    /** Represent a key press action. */\n    class KeyboardPress : public KeyAction\n    {\n    public:\n        /** KeyboardPress action constructor.\n         *\n         * @param key\n         *    Key to be pressed.\n         */\n        KeyboardPress(Key key);\n\n        /** KeyboardPress action constructor.\n         *\n         * @param keyType\n         *    Type of key to be pressed.\n         */\n        KeyboardPress(KeyType keyType);\n    };\n\n    /** Represent a key release action. */\n    class KeyboardRelease : public KeyAction\n    {\n    public:\n        /** KeyboardRelease action constructor.\n         *\n         * @param key\n         *    Key to be released.\n         */\n        KeyboardRelease(Key key);\n\n        /** KeyboardRelease action constructor.\n         *\n         * @param keyType\n         *    Type of key to be released.\n         */\n        KeyboardRelease(KeyType keyType);\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/mouseaction.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"mouseaction.hpp\"\n\nnamespace FakeInput\n{\n    // -------- MOUSE BUTTONS --------- //\n    MouseButtonAction::MouseButtonAction(MouseButtonActionCallback callback, MouseButton button):\n        callback_(callback),\n        button_(button)\n    {\n    }\n\n    Action* MouseButtonAction::clone() const\n    {\n        return new MouseButtonAction(callback_, button_);\n    }\n\n    void MouseButtonAction::send() const\n    {\n        (*callback_)(button_);\n    }\n\n    MousePress::MousePress(MouseButton button):\n        MouseButtonAction(Mouse::pressButton, button)\n    {\n    }\n\n    MouseRelease::MouseRelease(MouseButton button):\n        MouseButtonAction(Mouse::releaseButton, button)\n    {\n   
 }\n\n    // -------- MOUSE MOTION --------- //\n    MouseMotionAction::MouseMotionAction(MouseMotionActionCallback callback, int x, int y):\n        callback_(callback),\n        x_(x),\n        y_(y)\n    {\n    }\n\n    Action* MouseMotionAction::clone() const\n    {\n        return new MouseMotionAction(callback_, x_, y_);\n    }\n\n    void MouseMotionAction::send() const\n    {\n        callback_(x_, y_);\n    }\n\n    MouseRelativeMotion::MouseRelativeMotion(int dx, int dy):\n        MouseMotionAction(Mouse::move, dx, dy)\n    {\n    }\n\n    MouseAbsoluteMotion::MouseAbsoluteMotion(int x, int y):\n        MouseMotionAction(Mouse::moveTo, x, y)\n    {\n    }\n\n    // -------- MOUSE WHEEL --------- //\n    MouseWheelAction::MouseWheelAction(MouseWheelActionCallback callback):\n        callback_(callback)\n    {\n    }\n\n    Action* MouseWheelAction::clone() const\n    {\n        return new MouseWheelAction(callback_);\n    }\n\n    void MouseWheelAction::send() const\n    {\n        callback_();\n    }\n\n    MouseWheelUp::MouseWheelUp():\n        MouseWheelAction(Mouse::wheelUp)\n    {\n    }\n\n    MouseWheelDown::MouseWheelDown():\n        MouseWheelAction(Mouse::wheelDown)\n    {\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/mouseaction.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_MOUSEACTION_HPP\n#define FI_MOUSEACTION_HPP\n\n#include \"action.hpp\"\n#include \"mouse.hpp\"\n\nnamespace FakeInput\n{\n    // -------- MOUSE BUTTONS --------- //\n    typedef void (*MouseButtonActionCallback)(MouseButton button);\n\n    /** Represents mouse button action. 
*/\n    class MouseButtonAction : public Action\n    {\n    public:\n        /** MouseButtonAction constructor.\n         *\n         * @param callback\n         *     Callback which handles the mouse button.\n         * @param button\n         *     Mouse button to be handled.\n         */\n        MouseButtonAction(MouseButtonActionCallback callback, MouseButton button);\n\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Performs this action. */\n        void send() const;\n\n    private:\n        MouseButtonActionCallback callback_;\n        MouseButton button_;\n    };\n\n    /** Represents mouse button press action. */\n    class MousePress : public MouseButtonAction\n    {\n    public:\n        /** MousePress action constructor.\n         *\n         * @param button\n         *     Mouse button to press.\n         */\n        MousePress(MouseButton button);\n    };\n\n    /** Represents mouse button release action. */\n    class MouseRelease : public MouseButtonAction\n    {\n    public:\n        /** MouseRelease action constructor.\n         *\n         * @param button\n         *     Mouse button to release.\n         */\n        MouseRelease(MouseButton button);\n    };\n\n    // -------- MOUSE MOTION --------- //\n    typedef void (*MouseMotionActionCallback)(int x, int y);\n\n    /** Represents mouse motion action. */\n    class MouseMotionAction : public Action\n    {\n    public:\n        /** MouseMotionAction constructor.\n         *\n         * @param callback\n         *     Callback which moves the mouse.\n         * @param x\n         *     Value for the X axis (offset or position, depending on the callback).\n         * @param y\n         *     Value for the Y axis (offset or position, depending on the callback).\n         */\n        MouseMotionAction(MouseMotionActionCallback callback, int x, int y);\n\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Performs this action. */\n        void send() const;\n\n    private:\n        MouseMotionActionCallback callback_;\n        int x_;\n        int y_;\n    };\n\n    /** Represents mouse relative motion action.\n     *\n     * Relative means moving the mouse by a given number of pixels in some direction.\n     */\n    class MouseRelativeMotion : public MouseMotionAction\n    {\n    public:\n        /** MouseRelativeMotion constructor.\n         *\n         * @param dx\n         *     Amount of pixels to move on X axis.\n         * @param dy\n         *     Amount of pixels to move on Y axis.\n         */\n        MouseRelativeMotion(int dx, int dy);\n    };\n\n    /** Represents mouse absolute motion action.\n     *\n     * Absolute means moving the mouse to the specified position.\n     */\n    class MouseAbsoluteMotion : public MouseMotionAction\n    {\n    public:\n        /** MouseAbsoluteMotion constructor.\n         *\n         * @param x\n         *     Position on the X axis to move to.\n         * @param y\n         *     Position on the Y axis to move to.\n         */\n        MouseAbsoluteMotion(int x, int y);\n    };\n\n    // -------- MOUSE WHEEL --------- //\n    typedef void (*MouseWheelActionCallback)();\n\n    /** Represents mouse wheel action. */\n    class MouseWheelAction : public Action\n    {\n    public:\n        /** MouseWheelAction constructor.\n         *\n         * @param callback\n         *     Callback which handles the mouse wheel.\n         */\n        MouseWheelAction(MouseWheelActionCallback callback);\n\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Performs this action. */\n        void send() const;\n\n    private:\n        MouseWheelActionCallback callback_;\n    };\n\n    /** Represents mouse wheel up rotation action. */\n    class MouseWheelUp : public MouseWheelAction\n    {\n    public:\n        /** MouseWheelUp action constructor. */\n        MouseWheelUp();\n    };\n\n    /** Represents mouse wheel down rotation action. */\n    class MouseWheelDown : public MouseWheelAction\n    {\n    public:\n        /** MouseWheelDown action constructor. */\n        MouseWheelDown();\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/waitaction.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"waitaction.hpp\"\n\n#include \"system.hpp\"\n\nnamespace FakeInput\n{\n    Wait::Wait(unsigned int milisec):\n        milisec_(milisec)\n    {\n    }\n\n    Action* Wait::clone() const\n    {\n        return new Wait(milisec_);\n    }\n\n    void Wait::send() const\n    {\n        System::wait(milisec_);\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/actions/waitaction.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_WAITACTION_HPP\n#define FI_WAITACTION_HPP\n\n#include \"action.hpp\"\n\nnamespace FakeInput\n{\n    /** Represents a wait action.\n     *\n     * Waits specified amount of time in miliseconds.\n     */\n    class Wait : public Action\n    {\n    public:\n        /** Wait action constructor.\n         *\n         * @param milisec\n         *     Amount of time to wait in miliseconds.\n         */\n        Wait(unsigned int milisec);\n\n        /** Creates new copy of this action.\n         *\n         * @returns\n         *     New copy of this action allocated on heap.\n         */\n        Action* clone() const;\n\n        /** Performs this action. 
*/\n        void send() const;\n\n    private:\n        unsigned int milisec_;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/config.hpp",
    "content": "/**\r\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\r\n *\r\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\r\n * \r\n * Permission is hereby granted, free of charge, to any person obtaining a copy\r\n * of this software and associated documentation files (the \"Software\"), to deal\r\n * in the Software without restriction, including without limitation the rights\r\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\r\n * copies of the Software, and to permit persons to whom the Software is\r\n * furnished to do so, subject to the following conditions:\r\n * \r\n * The above copyright notice and this permission notice shall be included in\r\n * all copies or substantial portions of the Software.\r\n * \r\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\r\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\r\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\r\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\r\n * THE SOFTWARE.\r\n */\r\n\r\n#ifndef WC_CONFIG_HPP\r\n#define WC_CONFIG_HPP\r\n\r\n/* #undef TEST_APP */\r\n\r\n#ifndef UNIX\r\n/* #undef UNIX */\r\n#endif\r\n\r\n#ifndef WIN32\r\n    #define WIN32\r\n#endif\r\n\r\n#endif\r\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/config.hpp.cmake",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef WC_CONFIG_HPP\n#define WC_CONFIG_HPP\n\n#cmakedefine TEST_APP\n\n#ifndef UNIX\n    #cmakedefine UNIX\n#endif\n\n#ifndef WIN32\n    #cmakedefine WIN32\n#endif\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/display_unix.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"display_unix.hpp\"\n\nnamespace FakeInput\n{\n    Display* display()\n    {\n        static Display* display = XOpenDisplay(0);\n        return display;\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/display_unix.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_DISPLAY_UNIX_HPP\n#define FI_DISPLAY_UNIX_HPP\n\n#include \"config.hpp\"\n\n#include <X11/Xlib.h>\n\nnamespace FakeInput\n{\n    /** Get connection to the X server\n     *\n     * @warning @image html tux.png\n     *    Unix-like platform only\n     */\n    Display* display();\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/fakeinput.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n// include core\n#include \"keyboard.hpp\"\n#include \"mouse.hpp\"\n#include \"system.hpp\"\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_KEY_HPP\n#define FI_KEY_HPP\n\n#include \"config.hpp\"\n\n#ifdef UNIX\n    #include \"key_unix.hpp\"\n#elif WIN32\n    #include \"key_win.hpp\"\n#endif\n\nnamespace FakeInput\n{\n#ifdef UNIX\n    /** Class representing a real key.\n     *\n     * @image html tux.png\n     * On Unix-like platform derived from Key_unix\n     */\n    typedef Key_unix Key;\n#elif WIN32\n    /**\n     * @image html windows.png\n     * On Windows derived from Key_win\n     */\n    typedef Key_win Key;\n#endif\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key_base.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"key_base.hpp\"\n\n#include <stdexcept>\n#include <iostream>\n\nnamespace FakeInput\n{\n    Key_base::Key_base()\n    {\n        code_ = 0;\n        name_ = \"<no key>\";\n    }\n\n    unsigned int Key_base::code() const\n    {\n        return code_;\n    }\n    \n    const std::string& Key_base::name() const\n    {\n        return name_;\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key_base.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_KEY_BASE_HPP\n#define FI_KEY_BASE_HPP\n\n#include \"config.hpp\"\n\n#include <string>\n\nnamespace FakeInput\n{\n    /** Represents real keyboard button.\n     */\n    class Key_base\n    {\n    public:\n        /** Gives the hardware code of the key.\n         *\n         * @returns\n         *     Device dependend code of the key.\n         */\n        virtual unsigned int code() const;\n\n        /** Gives the name of the key.\n         *\n         * @returns\n         *     The name of the key.\n         */\n        virtual const std::string& name() const;\n\n    protected:\n        /** Key_base constructor.\n         *\n         * Creates key representing no real key.\n         */\n        
Key_base();\n\n        unsigned int code_;\n        std::string name_;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key_unix.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"key_unix.hpp\"\n\n#include <stdexcept>\n#include <iostream>\n\n#include \"display_unix.hpp\"\n#include \"mapper.hpp\"\n\nnamespace FakeInput\n{\n    Key_unix::Key_unix()\n    {\n        keysym_ = 0;\n    }\n\n    Key_unix::Key_unix(KeyType type)\n    {\n        if(type == Key_NoKey)\n        {\n            keysym_ = 0;\n            code_ = 0;\n            name_ = \"<no key>\";\n        }\n        else\n        {\n            keysym_ = translateKey(type);\n            code_ = XKeysymToKeycode(display(), keysym_);\n            name_ = XKeysymToString(keysym_);\n        }\n    }\n\n    Key_unix::Key_unix(XEvent* event):\n        Key_base()\n    {\n        if(event->type != KeyPress && event->type != 
KeyRelease)\n            throw std::logic_error(\"Cannot get key from non-key event\");\n\n        code_ = event->xkey.keycode;\n        keysym_ = XKeycodeToKeysym(event->xkey.display, code_, 0);\n        name_ = XKeysymToString(keysym_);\n    }\n\n    Key_unix::Key_unix(KeySym keysym):\n        Key_base()\n    {\n        code_ = XKeysymToKeycode(display(), keysym);\n        keysym_ = keysym;\n        name_ = XKeysymToString(keysym);\n    }\n\n    KeySym Key_unix::keysym() const\n    {\n        return keysym_;\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key_unix.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_KEY_UNIX_HPP\n#define FI_KEY_UNIX_HPP\n\n#include \"config.hpp\"\n\n#include <X11/Xlib.h>\n\n#include <string>\n\n#include \"key_base.hpp\"\n#include \"types.hpp\"\n\nnamespace FakeInput\n{\n    /** Represents real keyboard button with appropriate methods on Unix-like platform.\n     *\n     * @warning @image html tux.png\n     *    Unix-like platform only\n     */\n    class Key_unix : public Key_base\n    {\n    public:\n        /** Key constructor.\n         *\n         * Creates key representing no real key.\n         */\n        Key_unix();\n\n        /** Key constructor\n         *\n         * Creates key according to the key type\n         *\n         * @param type\n         *     KeyType to 
be created\n         *\n         * @note @image html note.png\n         *     There is no guarantee the key type will act as you expect,\n         *     it depends on keyboard layout, language etc., e.g. on a Czech\n         *     keyboard the @em Key_2 key will print @em 2 or @em ě according to\n         *     the set keyboard layout (US or Czech)\n         *\n         * @note\n         *     If there is no appropriate key type, you can use the platform-dependent\n         *     methods Key::fromKeySym (unix) and Key::fromVirtualKey (win)\n         */\n        Key_unix(KeyType type);\n        \n        /** Key constructor\n         *\n         * Creates a key from the keycode of the given X key event.\n         *\n         * @param event\n         *     XKeyEvent (Xlib key event) containing the keycode.\n         *\n         * @warning @image html tux.png\n         *    Unix-like platform only\n         */\n        Key_unix(XEvent* event);\n\n        /** Key constructor\n         *\n         * Creates a key from the given KeySym.\n         *\n         * @param keySym\n         *     The KeySym representing the appropriate key.\n         *     KeySyms can be obtained from the X11/keysymdef.h\n         *     file in the Xlib sources\n         *\n         * @warning @image html tux.png\n         *    Unix-like platform only.\n         */\n        Key_unix(KeySym keySym);\n\n        /** Gives the KeySym representing the key.\n         *\n         * @returns\n         *     KeySym representing the key.\n         *\n         * @warning @image html tux.png\n         *    Unix-like platform only\n         */\n        KeySym keysym() const;\n\n    private:\n        KeySym keysym_;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key_win.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"key_win.hpp\"\n\n#include <stdexcept>\n#include <iostream>\n\n#include \"mapper.hpp\"\n\nnamespace FakeInput\n{\n    Key_win::Key_win()\n    {\n        virtualKey_ = 0;\n    }\n\n    Key_win::Key_win(KeyType type)\n    {\n        if(type == Key_NoKey)\n        {\n            virtualKey_ = 0;\n            code_ = 0;\n            name_ = \"<no key>\";\n        }\n        else\n        {\n            virtualKey_ = (WORD) translateKey(type);\n\n            Key_win tmpKey(virtualKey_);\n            code_ = tmpKey.code_;\n            name_ = tmpKey.name_;\n        }\n    }\n\n    Key_win::Key_win(MSG* message)\n    {\n        switch(message->message)\n        {\n            case WM_KEYDOWN:\n            
case WM_KEYUP:\n            case WM_SYSKEYDOWN:\n            case WM_SYSKEYUP:\n                virtualKey_ = message->wParam;\n            break;\n            default:\n                throw std::logic_error(\"Cannot get key from non-key message\");\n            break;\n        }\n\n        Key_win tmpKey(virtualKey_);\n        code_ = tmpKey.code_;\n        name_ = tmpKey.name_;\n    }\n\n    Key_win::Key_win(WORD virtualKey):\n        virtualKey_(virtualKey)\n    {\n        code_ = MapVirtualKey(virtualKey, MAPVK_VK_TO_VSC);\n        LONG lParam = code_;\n        switch (virtualKey)\n        {\n            case VK_LEFT: case VK_UP: case VK_RIGHT: case VK_DOWN: // arrow keys\n            case VK_PRIOR: case VK_NEXT: // page up and page down\n            case VK_END: case VK_HOME:\n            case VK_INSERT: case VK_DELETE:\n            case VK_DIVIDE: // numpad slash\n            case VK_NUMLOCK:\n            {\n                lParam |= 0x100; // set extended bit\n                break;\n            }\n        }\n\n        char name[128];\n        if(GetKeyNameText(lParam << 16, name, 128))\n        {\n            name_ = std::string(name);\n        }\n        else\n        {\n            std::cerr << \"Cannot get key name\" << std::endl;\n            name_ = \"<unknown key>\";\n        }\n    }\n\n    WORD Key_win::virtualKey() const\n    {\n        return virtualKey_;\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/key_win.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_KEY_WIN_HPP\n#define FI_KEY_WIN_HPP\n\n#include \"config.hpp\"\n\n#include <windows.h>\n\n#include <string>\n\n#include \"key_base.hpp\"\n#include \"types.hpp\"\n\nnamespace FakeInput\n{\n    /** Represents real keyboard button with appropriate methods on Windows.\n     *\n     * @warning @image html windows.png\n     *    Windows platform only.\n     */\n    class Key_win : public Key_base\n    {\n    public:\n        /** Key constructor.\n         *\n         * Creates key representing no real key.\n         */\n        Key_win();\n\n        /** Key constructor\n         *\n         * Creates key according to the key type\n         *\n         * @param type\n         *     KeyType to be created\n   
      *\n         * @note @image html note.png\n         *     There is no guarantee the key type will act as you expect,\n         *     it depends on keyboard layout, language etc., e.g. on a Czech\n         *     keyboard the @em Key_2 key will print @em 2 or @em ě according to\n         *     the set keyboard layout (US or Czech)\n         *\n         * @note\n         *     If there is no appropriate key type, you can use the platform-dependent\n         *     methods Key::fromKeySym (unix) and Key::fromVirtualKey (win)\n         */\n        Key_win(KeyType type);\n                \n        /** Key constructor\n         *\n         * Creates a key from the virtual-key code of the given Windows key message.\n         *\n         * @param message\n         *     Windows WM_[SYS]KEY(DOWN|UP) message containing the virtual-key code.\n         *\n         * @warning @image html windows.png\n         *    Windows platform only\n         */\n        Key_win(MSG* message);\n\n        /** Key constructor\n         *\n         * Creates a key from the given virtual-key code.\n         *\n         * @param virtualKey\n         *     The virtual-key code.\n         *     Virtual-key codes can be obtained from the winuser.h header file\n         *     or http://msdn.microsoft.com/en-us/library/dd375731 .\n         *\n         * @warning @image html windows.png\n         *    Windows platform only.\n         */\n        Key_win(WORD virtualKey);\n\n        /** Gives the virtual-key code representing the key.\n         *\n         * @returns\n         *     virtual-key code representing the key.\n         *\n         * @warning @image html windows.png\n         *    Windows platform only.\n         */\n        WORD virtualKey() const;\n\n    private:\n        WORD virtualKey_;\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/keyboard.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"keyboard.hpp\"\n\nnamespace FakeInput\n{\n    void Keyboard::pressKey(Key key)\n    {\n        sendKeyEvent_(key, true);\n    }\n\n    void Keyboard::releaseKey(Key key)\n    {\n        sendKeyEvent_(key, false);\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/keyboard.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_KEYBOARD_HPP\n#define FI_KEYBOARD_HPP\n\n#include \"config.hpp\"\n\n#include \"key.hpp\"\n\nnamespace FakeInput\n{\n    /** Represents keyboard device.\n     *\n     * Allows you to simulate key press.\n     */\n    class Keyboard\n    {\n    public:\n        /** Simulate key press.\n         *\n         * @param key\n         *      Key object representing real key to be pressed.\n         */\n        static void pressKey(Key key);\n\n        /** Simulate key release \n         *\n         * @param key\n         *      Key object representing real key to be released.\n         */\n        static void releaseKey(Key key);\n\n    private:\n        /** Send fake key event to the system.\n         *\n   
      * @param key\n         *      Key object representing real key to be pressed.\n         * @param isPress\n         *      Whether event is press or release.\n         */\n        static void sendKeyEvent_(Key key, bool isPress);        \n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/keyboard_unix.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"keyboard.hpp\"\n\n#include <X11/Xlib.h>\n#include <X11/extensions/XTest.h>\n\n#include \"display_unix.hpp\"\n\n#include <iostream>\n\nnamespace FakeInput\n{\n    void Keyboard::sendKeyEvent_(Key key, bool isPress)\n    {\n        if(key.keysym() == NoSymbol)\n        {\n            std::cerr << \"Cannot send <no key> event\" << std::endl;\n        }\n        else\n        {\n            XTestFakeKeyEvent(display(), key.code(), isPress, CurrentTime);\n            XFlush(display());\n        }\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/keyboard_win.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"keyboard.hpp\"\n\n#include <windows.h>\n\n#include <iostream>\n\nnamespace FakeInput\n{\n    void Keyboard::sendKeyEvent_(Key key, bool isPress)\n    {\n        if(key.code() == 0)\n        {\n            std::cerr << \"Cannot send <no key> event\" << std::endl;\n        }\n        else\n        {\n            INPUT input;\n            ZeroMemory(&input, sizeof(INPUT));\n            input.type = INPUT_KEYBOARD;\n            input.ki.wVk = key.virtualKey();\n            input.ki.dwFlags = (isPress) ? 0 : KEYEVENTF_KEYUP;\n            SendInput(1, &input, sizeof(INPUT));\n        }\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/mapper.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_MAPPER_HPP\n#define FI_MAPPER_HPP\n\n#include \"mouse.hpp\"\n#include \"key.hpp\"\n\nnamespace FakeInput\n{\n    /** Translates MouseButton to the platform's representation of the button. */\n    long translateMouseButton(MouseButton button);\n\n    /** Translates KeyType to the platform's representation of the key. */\n    unsigned long translateKey(KeyType key);\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/mapper_unix.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"mapper.hpp\"\n\n#include <X11/Xlib.h>\n#include <X11/keysym.h>\n\nnamespace FakeInput\n{\n    static long buttonTable[] = {\n        Button1, // Mouse::LEFT\n        Button2, // Mouse::MIDDLE\n        Button3 // Mouse::RIGHT\n    };\n\n    static long keyTable[] = {\n        XK_A, XK_B, XK_C, XK_D, XK_E, XK_F, XK_G, XK_H, XK_I,\n        XK_J, XK_K, XK_L, XK_M, XK_N, XK_O, XK_P, XK_Q, XK_R,\n        XK_S, XK_T, XK_U, XK_V, XK_W, XK_X, XK_Y, XK_Z, \n\n        XK_0, XK_1, XK_2, XK_3, XK_4, XK_5, XK_6, XK_7, XK_8, XK_9, \n\n        XK_F1, XK_F2, XK_F3, XK_F4, XK_F5, XK_F6, XK_F7, XK_F8, XK_F9, XK_F10, XK_F11, XK_F12,\n        XK_F13, XK_F14, XK_F15, XK_F16, XK_F17, XK_F18, XK_F19, XK_F20, XK_F21, XK_F22, 
XK_F23, XK_F24,\n\n        XK_Escape,\n        XK_space,\n        XK_Return,\n        XK_BackSpace,\n        XK_Tab,\n\n        XK_Shift_L,\n        XK_Shift_R,\n        XK_Control_L,\n        XK_Control_R,\n        XK_Alt_L,\n        XK_Alt_R,\n        XK_Super_L,\n        XK_Super_R,\n        XK_Menu,\n\n        XK_Caps_Lock,\n        XK_Num_Lock,\n        XK_Scroll_Lock,\n\n        XK_Print,\n        XK_Pause,\n\n        XK_Insert,\n        XK_Delete,\n        XK_Prior,\n        XK_Next,\n        XK_Home,\n        XK_End,\n\n        XK_Left,\n        XK_Up,\n        XK_Right,\n        XK_Down,\n\n        XK_KP_0,\n        XK_KP_1,\n        XK_KP_2,\n        XK_KP_3,\n        XK_KP_4,\n        XK_KP_5,\n        XK_KP_6,\n        XK_KP_7,\n        XK_KP_8,\n        XK_KP_9,\n\n        XK_KP_Add,\n        XK_KP_Subtract,\n        XK_KP_Multiply,\n        XK_KP_Divide,\n        XK_KP_Decimal,\n        XK_KP_Enter\n    };\n\n    long translateMouseButton(MouseButton button)\n    {\n        return buttonTable[button];\n    }\n\n    unsigned long translateKey(KeyType key)\n    {\n        return keyTable[key];\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/mapper_win.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"mapper.hpp\"\n\n#include <windows.h>\n\nnamespace FakeInput\n{\n    static long buttonTable[] = {\n        MOUSEEVENTF_LEFTDOWN, // Mouse::LEFT\n        MOUSEEVENTF_MIDDLEDOWN, // Mouse::MIDDLE\n        MOUSEEVENTF_RIGHTDOWN // Mouse::RIGHT\n    };\n\n    static long keyTable[] = {\n        0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, // A - I\n        0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F, 0x50, 0x51, 0x52, // J - R\n        0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5A, // S - Z\n\n        0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, // 0 - 9\n\n        VK_F1, VK_F2, VK_F3, VK_F4, VK_F5, VK_F6, VK_F7, VK_F8, VK_F9, VK_F10, VK_F11, VK_F12,\n        VK_F13, VK_F14, VK_F15, 
VK_F16, VK_F17, VK_F18, VK_F19, VK_F20, VK_F21, VK_F22, VK_F23, VK_F24,\n\n        VK_ESCAPE,\n        VK_SPACE,\n        VK_RETURN,\n        VK_BACK,\n        VK_TAB,\n\n        VK_LSHIFT,\n        VK_RSHIFT,\n        VK_LCONTROL,\n        VK_RCONTROL,\n        VK_LMENU,\n        VK_RMENU,\n        VK_LWIN,\n        VK_RWIN,\n        VK_APPS,\n\n        VK_CAPITAL,\n        VK_NUMLOCK,\n        VK_SCROLL,\n\n        VK_SNAPSHOT,\n        VK_PAUSE,\n\n        VK_INSERT,\n        VK_DELETE,\n        VK_PRIOR,\n        VK_NEXT,\n        VK_HOME,\n        VK_END,\n\n        VK_LEFT,\n        VK_UP,\n        VK_RIGHT,\n        VK_DOWN,\n\n        VK_NUMPAD0,\n        VK_NUMPAD1,\n        VK_NUMPAD2,\n        VK_NUMPAD3,\n        VK_NUMPAD4,\n        VK_NUMPAD5,\n        VK_NUMPAD6,\n        VK_NUMPAD7,\n        VK_NUMPAD8,\n        VK_NUMPAD9,\n\n        VK_ADD,\n        VK_SUBTRACT,\n        VK_MULTIPLY,\n        VK_DIVIDE,\n        VK_DECIMAL,\n        VK_RETURN // Key_NumpadEnter; Windows has no dedicated numpad-enter virtual-key code\n    };\n\n    long translateMouseButton(MouseButton button)\n    {\n        return buttonTable[button];\n    }\n\n    unsigned long translateKey(KeyType key)\n    {\n        return keyTable[key];\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/mouse.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_MOUSE_HPP\n#define FI_MOUSE_HPP\n\n#include \"config.hpp\"\n#include \"types.hpp\"\n\nnamespace FakeInput\n{\n    /** Represents mouse device.\n     *\n     * Allows you to move the cursor to the required position.\n     * Or simulate mouse button press.\n     */\n    class Mouse\n    {\n    public:\n        /** Moves mouse cursor in direction.\n         *\n         * @param xDirection\n         *     Direction in pixels on X-axis.\n         * @param yDirection\n         *     Direction in pixels on Y-axis.\n         */\n        static void move(int xDirection, int yDirection);\n\n        /** Moves mouse cursor to the specified position.\n         *\n         * @param x\n         *     X coordinate 
of the position.\n         * @param y\n         *     Y coordinate of the position.\n         */\n        static void moveTo(int x, int y);\n\n        /** Simulates mouse button press.\n         *\n         * @param button\n         *     MouseButton to be pressed.\n         */\n        static void pressButton(MouseButton button);\n\n        /** Simulates mouse button release.\n         *\n         * @param button\n         *     MouseButton to be released.\n         */\n        static void releaseButton(MouseButton button);\n\n        /** Simulates wheel up move. */\n        static void wheelUp();\n\n        /** Simulates wheel down move. */\n        static void wheelDown();\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/mouse_unix.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"mouse.hpp\"\n\n#include <X11/Xlib.h>\n#include <X11/extensions/XTest.h>\n\n#include \"display_unix.hpp\"\n\n#include \"mapper.hpp\"\n\nnamespace FakeInput\n{\n    void Mouse::move(int x, int y)\n    {\n        XTestFakeRelativeMotionEvent(display(), x, y, CurrentTime);\n        XFlush(display());\n    }\n\n    void Mouse::moveTo(int x, int y)\n    {\n        XTestFakeMotionEvent(display(), -1, x, y, CurrentTime);\n        XFlush(display());\n    }\n\n    void Mouse::pressButton(MouseButton button)\n    {\n        int xButton = translateMouseButton(button);\n        XTestFakeButtonEvent(display(), xButton, true, CurrentTime);\n        XFlush(display());\n    }\n\n    void 
Mouse::releaseButton(MouseButton button)\n    {\n        int xButton = translateMouseButton(button);\n        XTestFakeButtonEvent(display(), xButton, false, CurrentTime);\n        XFlush(display());\n    }\n\n    void Mouse::wheelUp()\n    {\n        XTestFakeButtonEvent(display(), 4, true, CurrentTime);\n        XSync(display(), false);\n        XTestFakeButtonEvent(display(), 4, false, CurrentTime);\n        XFlush(display());\n    }\n\n    void Mouse::wheelDown()\n    {\n        XTestFakeButtonEvent(display(), 5, true, CurrentTime);\n        XSync(display(), false);\n        XTestFakeButtonEvent(display(), 5, false, CurrentTime);\n        XFlush(display());\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/mouse_win.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"mouse.hpp\"\n\n#include <windows.h>\n\n#include \"mapper.hpp\"\n#include \"types.hpp\"\n\n#define INIT_INPUT(var) \\\n    INPUT var; \\\n    ZeroMemory(&var, sizeof(INPUT)); \\\n    var.type = INPUT_MOUSE;\n\nnamespace FakeInput\n{\n    void Mouse::move(int x, int y)\n    {\n        INIT_INPUT(input);\n        input.mi.dx = x;\n        input.mi.dy = y;\n        input.mi.dwFlags = MOUSEEVENTF_MOVE;\n        SendInput(1, &input, sizeof(INPUT));\n    }\n\n    void Mouse::moveTo(int x, int y)\n    {\n        double screenWidth = GetSystemMetrics(SM_CXSCREEN) - 1; \n        double screenHeight = GetSystemMetrics(SM_CYSCREEN) - 1; \n        double fx = x * (65535.0f / screenWidth);\n        double fy = y * 
(65535.0f / screenHeight);\n\n        INIT_INPUT(input);\n        input.mi.dx = (LONG) fx;\n        input.mi.dy = (LONG) fy;\n        input.mi.dwFlags = MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE;\n        SendInput(1, &input, sizeof(INPUT));\n    }\n\n    void Mouse::pressButton(MouseButton button)\n    {\n        INIT_INPUT(input);\n        input.mi.dwFlags = translateMouseButton(button);\n\n        SendInput(1, &input, sizeof(INPUT));\n    }\n\n    void Mouse::releaseButton(MouseButton button)\n    {\n        INIT_INPUT(input);\n        // Each MOUSEEVENTF_*UP flag is the matching *DOWN flag shifted left by one.\n        input.mi.dwFlags = translateMouseButton(button) << 1;\n\n        SendInput(1, &input, sizeof(INPUT));\n    }\n\n    void Mouse::wheelUp()\n    {\n        INIT_INPUT(input);\n        input.mi.dwFlags = MOUSEEVENTF_WHEEL;\n        input.mi.mouseData = WHEEL_DELTA;\n\n        SendInput(1, &input, sizeof(INPUT));\n    }\n\n    void Mouse::wheelDown()\n    {\n        INIT_INPUT(input);\n        input.mi.dwFlags = MOUSEEVENTF_WHEEL;\n        input.mi.mouseData = -WHEEL_DELTA;\n\n        SendInput(1, &input, sizeof(INPUT));\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/system.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_SYSTEM_HPP\n#define FI_SYSTEM_HPP\n\n#include \"config.hpp\"\n\n#include <string>\n\nnamespace FakeInput\n{\n    /** Handler of some system operations.\n     *\n     * Allows you to run command-line commands\n     * and wait for specified time\n     */\n    class System\n    {\n    public:\n        /** Executes command-line command.\n         *\n         * @param cmd\n         *     %Command to run.\n         */\n        static void run(const std::string& cmd);\n\n        /** Sleeps the current thread and wait for specified time.\n         *\n         * @param milisec\n         *     time to wait in miliseconds\n         */\n        static void wait(unsigned int milisec);\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/system_unix.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"system.hpp\"\n\n#include <stdlib.h>\n#include <time.h>\n\nnamespace FakeInput\n{\n    void System::run(const std::string& cmd)\n    {\n        std::string command = cmd + \" &\";\n\n        system(command.c_str());\n    }\n\n    void System::wait(unsigned int milisec)\n    {\n        timespec tm;\n        tm.tv_sec = milisec / 1000;\n        tm.tv_nsec = (milisec % 1000) * 1000000;\n\n        nanosleep(&tm, NULL);\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/system_win.cpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"system.hpp\"\n\n#include <stdlib.h>\n#include <windows.h>\n\nnamespace FakeInput\n{\n    void System::run(const std::string& cmd)\n    {\n        std::string command = \"start \" + cmd;\n\n        system(command.c_str());\n    }\n\n    void System::wait(unsigned int milisec)\n    {\n        Sleep(milisec);\n    }\n}\n"
  },
  {
    "path": "ThirdParty/FakeInput/src/types.hpp",
    "content": "/**\n * This file is part of the FakeInput library (https://github.com/uiii/FakeInput)\n *\n * Copyright (C) 2011 by Richard Jedlicka <uiii.dev@gmail.com>\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#ifndef FI_TYPES_HPP\n#define FI_TYPES_HPP\n\nnamespace FakeInput\n{\n    /** Mouse button which can be pressed or released */\n    enum MouseButton {\n        Mouse_Left,\n        Mouse_Middle,\n        Mouse_Right\n    };\n\n    /** Predefined common US keyboard layout key types */\n    enum KeyType {\n        Key_NoKey = -1,\n\n        Key_A, Key_B, Key_C, Key_D, Key_E, Key_F,\n        Key_G, Key_H, Key_I, Key_J, Key_K, Key_L,\n        Key_M, Key_N, Key_O, Key_P, Key_Q, Key_R,\n        Key_S, Key_T, Key_U, Key_V, Key_W, Key_X,\n        Key_Y, Key_Z, \n\n        Key_0, Key_1, Key_2, Key_3, Key_4, Key_5, Key_6, Key_7, Key_8, Key_9,\n\n        Key_F1, Key_F2, Key_F3, Key_F4, Key_F5, Key_F6, Key_F7, Key_F8,\n    
    Key_F9, Key_F10, Key_F11, Key_F12, Key_F13, Key_F14, Key_F15, Key_F16,\n        Key_F17, Key_F18, Key_F19, Key_F20, Key_F21, Key_F22, Key_F23, Key_F24,\n\n        Key_Escape,\n        Key_Space,\n        Key_Return,\n        Key_Backspace,\n        Key_Tab,\n\n        Key_Shift_L,\n        Key_Shift_R,\n        Key_Control_L,\n        Key_Control_R,\n        Key_Alt_L,\n        Key_Alt_R,\n        Key_Win_L,\n        Key_Win_R,\n        Key_Apps,\n\n        Key_CapsLock,\n        Key_NumLock,\n        Key_ScrollLock,\n\n        Key_PrintScreen,\n        Key_Pause,\n\n        Key_Insert,\n        Key_Delete,\n        Key_PageUP,\n        Key_PageDown,\n        Key_Home,\n        Key_End,\n\n        Key_Left,\n        Key_Up,\n        Key_Right,\n        Key_Down,\n\n        Key_Numpad0,\n        Key_Numpad1,\n        Key_Numpad2,\n        Key_Numpad3,\n        Key_Numpad4,\n        Key_Numpad5,\n        Key_Numpad6,\n        Key_Numpad7,\n        Key_Numpad8,\n        Key_Numpad9,\n\n        Key_NumpadAdd,\n        Key_NumpadSubtract,\n        Key_NumpadMultiply,\n        Key_NumpadDivide,\n        Key_NumpadDecimal,\n        Key_NumpadEnter\n    };\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/ViGEmBusDriver.h",
    "content": "/*\nMIT License\n\nCopyright (c) 2016 Benjamin \"Nefarius\" Hglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n*/\n\n\n// {A77BC4D5-6AF7-4E69-8DC4-6B88A6028CE6}\n// ReSharper disable once CppMissingIncludeGuard\nDEFINE_GUID(GUID_VIGEM_INTERFACE_PDO,\n    0xA77BC4D5, 0x6AF7, 0x4E69, 0x8D, 0xC4, 0x6B, 0x88, 0xA6, 0x02, 0x8C, 0xE6);\n\n// {A8BA2D1F-894F-464A-B0CE-7A0C8FD65DF1}\nDEFINE_GUID(GUID_DEVCLASS_VIGEM_RAWPDO,\n    0xA8BA2D1F, 0x894F, 0x464A, 0xB0, 0xCE, 0x7A, 0x0C, 0x8F, 0xD6, 0x5D, 0xF1);\n\n#pragma once\n\n\n//\n// Describes the current stage a PDO completed\n// \ntypedef enum _VIGEM_PDO_STAGE\n{\n    ViGEmPdoCreate,\n    ViGEmPdoPrepareHardware,\n    ViGEmPdoInternalIoControl\n\n} VIGEM_PDO_STAGE, *PVIGEM_PDO_STAGE;\n\n//\n// PDO stage result callback definition\n// \ntypedef\nVOID\n(*PVIGEM_BUS_PDO_STAGE_RESULT)(\n    _In_ PINTERFACE InterfaceHeader,\n    _In_ VIGEM_PDO_STAGE Stage,\n    _In_ ULONG Serial,\n    _In_ NTSTATUS Status\n    );\n\ntypedef 
struct _VIGEM_BUS_INTERFACE {\n    // \n    // Standard interface header, must be present\n    // \n    INTERFACE InterfaceHeader;\n\n    //\n    // PDO stage result callback\n    // \n    PVIGEM_BUS_PDO_STAGE_RESULT BusPdoStageResult;\n\n} VIGEM_BUS_INTERFACE, *PVIGEM_BUS_INTERFACE;\n\n#define VIGEM_BUS_INTERFACE_VERSION      1\n\nVOID FORCEINLINE BUS_PDO_REPORT_STAGE_RESULT(\n    VIGEM_BUS_INTERFACE Interface, \n    VIGEM_PDO_STAGE Stage,\n    ULONG Serial,\n    NTSTATUS Status\n)\n{\n    (*Interface.BusPdoStageResult)(&Interface.InterfaceHeader, Stage, Serial, Status);\n}\n\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/ViGEmBusShared.h",
    "content": "/*\nMIT License\n\nCopyright (c) 2016 Benjamin \"Nefarius\" Hglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n*/\n\n\n// {96E42B22-F5E9-42F8-B043-ED0F932F014F}\n// ReSharper disable once CppMissingIncludeGuard\nDEFINE_GUID(GUID_DEVINTERFACE_BUSENUM_VIGEM,\n    0x96E42B22, 0xF5E9, 0x42F8, 0xB0, 0x43, 0xED, 0x0F, 0x93, 0x2F, 0x01, 0x4F);\n\n#pragma once\n\n#include \"ViGEmCommon.h\"\n\n//\n// Common version for user-mode library and driver compatibility\n// \n// On initialization, the user-mode library has this number embedded\n// and sends it to the bus on its enumeration. The bus compares this\n// number to the one it was compiled with. If they match, the bus\n// access is permitted and success reported. 
If they mismatch, an\n// error is reported and the user-mode library skips this instance.\n// \n#define VIGEM_COMMON_VERSION            0x0001\n\n#define FILE_DEVICE_BUSENUM             FILE_DEVICE_BUS_EXTENDER\n#define BUSENUM_IOCTL(_index_)          CTL_CODE(FILE_DEVICE_BUSENUM, _index_, METHOD_BUFFERED, FILE_READ_DATA)\n#define BUSENUM_W_IOCTL(_index_)        CTL_CODE(FILE_DEVICE_BUSENUM, _index_, METHOD_BUFFERED, FILE_WRITE_DATA)\n#define BUSENUM_R_IOCTL(_index_)        CTL_CODE(FILE_DEVICE_BUSENUM, _index_, METHOD_BUFFERED, FILE_READ_DATA)\n#define BUSENUM_RW_IOCTL(_index_)       CTL_CODE(FILE_DEVICE_BUSENUM, _index_, METHOD_BUFFERED, FILE_WRITE_DATA | FILE_READ_DATA)\n\n#define IOCTL_VIGEM_BASE 0x801\n\n// \n// IO control codes\n// \n#define IOCTL_VIGEM_PLUGIN_TARGET       BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x000)\n#define IOCTL_VIGEM_UNPLUG_TARGET       BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x001)\n#define IOCTL_VIGEM_CHECK_VERSION       BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x002)\n\n#define IOCTL_XUSB_REQUEST_NOTIFICATION BUSENUM_RW_IOCTL(IOCTL_VIGEM_BASE + 0x200)\n#define IOCTL_XUSB_SUBMIT_REPORT        BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x201)\n#define IOCTL_DS4_SUBMIT_REPORT         BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x202)\n#define IOCTL_DS4_REQUEST_NOTIFICATION  BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x203)\n#define IOCTL_XGIP_SUBMIT_REPORT        BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x204)\n#define IOCTL_XGIP_SUBMIT_INTERRUPT     BUSENUM_W_IOCTL (IOCTL_VIGEM_BASE + 0x205)\n\n\n//\n//  Data structure used in PlugIn and UnPlug ioctls\n//\n\n#pragma region Plugin\n\n//\n// Data structure used in IOCTL_VIGEM_PLUGIN_TARGET requests.\n// \ntypedef struct _VIGEM_PLUGIN_TARGET\n{\n    //\n    // sizeof (struct _BUSENUM_HARDWARE)\n    //\n    IN ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    IN ULONG SerialNo;\n\n    // \n    // Type of the target device to emulate.\n    // \n    VIGEM_TARGET_TYPE TargetType;\n\n    //\n    // If set, 
the vendor ID the emulated device is reporting\n    // \n    USHORT VendorId;\n\n    //\n    // If set, the product ID the emulated device is reporting\n    // \n    USHORT ProductId;\n\n} VIGEM_PLUGIN_TARGET, *PVIGEM_PLUGIN_TARGET;\n\n//\n// Initializes a VIGEM_PLUGIN_TARGET structure.\n// \nVOID FORCEINLINE VIGEM_PLUGIN_TARGET_INIT(\n    _Out_ PVIGEM_PLUGIN_TARGET PlugIn,\n          _In_ ULONG SerialNo,\n          _In_ VIGEM_TARGET_TYPE TargetType\n)\n{\n    RtlZeroMemory(PlugIn, sizeof(VIGEM_PLUGIN_TARGET));\n\n    PlugIn->Size = sizeof(VIGEM_PLUGIN_TARGET);\n    PlugIn->SerialNo = SerialNo;\n    PlugIn->TargetType = TargetType;\n}\n\n#pragma endregion \n\n#pragma region Unplug\n\n//\n// Data structure used in IOCTL_VIGEM_UNPLUG_TARGET requests.\n// \ntypedef struct _VIGEM_UNPLUG_TARGET\n{\n    //\n    // sizeof(struct _VIGEM_UNPLUG_TARGET)\n    //\n    IN ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n} VIGEM_UNPLUG_TARGET, *PVIGEM_UNPLUG_TARGET;\n\n//\n// Initializes a VIGEM_UNPLUG_TARGET structure.\n// \nVOID FORCEINLINE VIGEM_UNPLUG_TARGET_INIT(\n    _Out_ PVIGEM_UNPLUG_TARGET UnPlug,\n          _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(UnPlug, sizeof(VIGEM_UNPLUG_TARGET));\n\n    UnPlug->Size = sizeof(VIGEM_UNPLUG_TARGET);\n    UnPlug->SerialNo = SerialNo;\n}\n\n#pragma endregion\n\n#pragma region Check version\n\ntypedef struct _VIGEM_CHECK_VERSION\n{\n    IN ULONG Size;\n\n    IN ULONG Version;\n\n} VIGEM_CHECK_VERSION, *PVIGEM_CHECK_VERSION;\n\nVOID FORCEINLINE VIGEM_CHECK_VERSION_INIT(\n    _Out_ PVIGEM_CHECK_VERSION CheckVersion,\n    _In_ ULONG Version\n)\n{\n    RtlZeroMemory(CheckVersion, sizeof(VIGEM_CHECK_VERSION));\n\n    CheckVersion->Size = sizeof(VIGEM_CHECK_VERSION);\n    CheckVersion->Version = Version;\n}\n\n#pragma endregion \n\n#pragma region XUSB (aka Xbox 360 device) section\n\n//\n// Data structure used in IOCTL_XUSB_REQUEST_NOTIFICATION requests.\n// \ntypedef struct 
_XUSB_REQUEST_NOTIFICATION\n{\n    //\n    // sizeof(struct _XUSB_REQUEST_NOTIFICATION)\n    // \n    ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n    //\n    // Vibration intensity value of the large motor (0-255).\n    // \n    UCHAR LargeMotor;\n\n    //\n    // Vibration intensity value of the small motor (0-255).\n    // \n    UCHAR SmallMotor;\n\n    //\n    // Index number of the slot/LED that XUSB.sys has assigned.\n    // \n    UCHAR LedNumber;\n\n} XUSB_REQUEST_NOTIFICATION, *PXUSB_REQUEST_NOTIFICATION;\n\n//\n// Initializes a XUSB_REQUEST_NOTIFICATION structure.\n// \nVOID FORCEINLINE XUSB_REQUEST_NOTIFICATION_INIT(\n    _Out_ PXUSB_REQUEST_NOTIFICATION Request,\n          _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(Request, sizeof(XUSB_REQUEST_NOTIFICATION));\n\n    Request->Size = sizeof(XUSB_REQUEST_NOTIFICATION);\n    Request->SerialNo = SerialNo;\n}\n\n//\n// Data structure used in IOCTL_XUSB_SUBMIT_REPORT requests.\n// \ntypedef struct _XUSB_SUBMIT_REPORT\n{\n    //\n    // sizeof(struct _XUSB_SUBMIT_REPORT)\n    // \n    ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n    //\n    // Report to submit to the target device.\n    // \n    XUSB_REPORT Report;\n\n} XUSB_SUBMIT_REPORT, *PXUSB_SUBMIT_REPORT;\n\n//\n// Initializes an XUSB report.\n// \nVOID FORCEINLINE XUSB_SUBMIT_REPORT_INIT(\n    _Out_ PXUSB_SUBMIT_REPORT Report,\n    _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(Report, sizeof(XUSB_SUBMIT_REPORT));\n\n    Report->Size = sizeof(XUSB_SUBMIT_REPORT);\n    Report->SerialNo = SerialNo;\n}\n\n#pragma endregion\n\n#pragma region DualShock 4 section\n\ntypedef struct _DS4_OUTPUT_REPORT\n{\n    //\n    // Vibration intensity value of the small motor (0-255).\n    // \n    UCHAR SmallMotor;\n\n    //\n    // Vibration intensity value of the large motor (0-255).\n    // \n    UCHAR LargeMotor;\n\n    //\n    // Color values of the Lightbar.\n    //\n   
 DS4_LIGHTBAR_COLOR LightbarColor;\n\n} DS4_OUTPUT_REPORT, *PDS4_OUTPUT_REPORT;\n\n//\n// Data structure used in IOCTL_DS4_REQUEST_NOTIFICATION requests.\n// \ntypedef struct _DS4_REQUEST_NOTIFICATION\n{\n    //\n    // sizeof(struct _DS4_REQUEST_NOTIFICATION)\n    // \n    ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n    //\n    // The HID output report\n    // \n    DS4_OUTPUT_REPORT Report;\n\n} DS4_REQUEST_NOTIFICATION, *PDS4_REQUEST_NOTIFICATION;\n\n//\n// Initializes a DS4_REQUEST_NOTIFICATION structure.\n// \nVOID FORCEINLINE DS4_REQUEST_NOTIFICATION_INIT(\n    _Out_ PDS4_REQUEST_NOTIFICATION Request,\n    _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(Request, sizeof(DS4_REQUEST_NOTIFICATION));\n\n    Request->Size = sizeof(DS4_REQUEST_NOTIFICATION);\n    Request->SerialNo = SerialNo;\n}\n\n//\n// DualShock 4 request data\n// \ntypedef struct _DS4_SUBMIT_REPORT\n{\n    //\n    // sizeof(struct _DS4_SUBMIT_REPORT)\n    // \n    ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n    //\n    // HID Input report\n    // \n    DS4_REPORT Report;\n\n} DS4_SUBMIT_REPORT, *PDS4_SUBMIT_REPORT;\n\n//\n// Initializes a DualShock 4 report.\n// \nVOID FORCEINLINE DS4_SUBMIT_REPORT_INIT(\n    _Out_ PDS4_SUBMIT_REPORT Report,\n    _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(Report, sizeof(DS4_SUBMIT_REPORT));\n\n    Report->Size = sizeof(DS4_SUBMIT_REPORT);\n    Report->SerialNo = SerialNo;\n\n    DS4_REPORT_INIT(&Report->Report);\n}\n\n#pragma endregion\n\n#pragma region XGIP (aka Xbox One device) section - EXPERIMENTAL\n\ntypedef struct _XGIP_REPORT\n{\n    UCHAR Buttons1;\n    UCHAR Buttons2;\n    SHORT LeftTrigger;\n    SHORT RightTrigger;\n    SHORT ThumbLX;\n    SHORT ThumbLY;\n    SHORT ThumbRX;\n    SHORT ThumbRY;\n\n} XGIP_REPORT, *PXGIP_REPORT;\n\n//\n// Xbox One request data\n// \ntypedef struct _XGIP_SUBMIT_REPORT\n{\n    //\n    // sizeof(struct 
_XGIP_SUBMIT_REPORT)\n    // \n    ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n    //\n    // HID Input report\n    // \n    XGIP_REPORT Report;\n\n} XGIP_SUBMIT_REPORT, *PXGIP_SUBMIT_REPORT;\n\n//\n// Initializes an Xbox One report.\n// \nVOID FORCEINLINE XGIP_SUBMIT_REPORT_INIT(\n    _Out_ PXGIP_SUBMIT_REPORT Report,\n    _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(Report, sizeof(XGIP_SUBMIT_REPORT));\n\n    Report->Size = sizeof(XGIP_SUBMIT_REPORT);\n    Report->SerialNo = SerialNo;\n}\n\n//\n// Xbox One interrupt data\n// \ntypedef struct _XGIP_SUBMIT_INTERRUPT\n{\n    //\n    // sizeof(struct _XGIP_SUBMIT_INTERRUPT)\n    // \n    ULONG Size;\n\n    //\n    // Serial number of target device.\n    // \n    ULONG SerialNo;\n\n    //\n    // Interrupt buffer.\n    // \n    UCHAR Interrupt[64];\n\n    //\n    // Length of interrupt buffer.\n    // \n    ULONG InterruptLength;\n\n} XGIP_SUBMIT_INTERRUPT, *PXGIP_SUBMIT_INTERRUPT;\n\n//\n// Initializes an Xbox One interrupt.\n// \nVOID FORCEINLINE XGIP_SUBMIT_INTERRUPT_INIT(\n    _Out_ PXGIP_SUBMIT_INTERRUPT Report,\n    _In_ ULONG SerialNo\n)\n{\n    RtlZeroMemory(Report, sizeof(XGIP_SUBMIT_INTERRUPT));\n\n    Report->Size = sizeof(XGIP_SUBMIT_INTERRUPT);\n    Report->SerialNo = SerialNo;\n}\n\n#pragma endregion\n\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/ViGEmClient.h",
    "content": "/*\nMIT License\n\nCopyright (c) 2017 Benjamin \"Nefarius\" Höglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n*/\n\n\n#ifndef ViGEmClient_h__\n#define ViGEmClient_h__\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"ViGEmCommon.h\"\n\n#ifdef VIGEM_DYNAMIC\n#ifdef VIGEM_EXPORTS\n#define VIGEM_API __declspec(dllexport)\n#else\n#define VIGEM_API __declspec(dllimport)\n#endif\n#else\n#define VIGEM_API\n#endif\n\n/**\n * \\typedef enum _VIGEM_ERRORS\n *\n * \\brief   Defines an alias representing the ViGEm errors.\n */\ntypedef enum _VIGEM_ERRORS\n{\n    VIGEM_ERROR_NONE = 0x20000000,\n    VIGEM_ERROR_BUS_NOT_FOUND = 0xE0000001,\n    VIGEM_ERROR_NO_FREE_SLOT = 0xE0000002,\n    VIGEM_ERROR_INVALID_TARGET = 0xE0000003,\n    VIGEM_ERROR_REMOVAL_FAILED = 0xE0000004,\n    VIGEM_ERROR_ALREADY_CONNECTED = 0xE0000005,\n    VIGEM_ERROR_TARGET_UNINITIALIZED = 0xE0000006,\n    VIGEM_ERROR_TARGET_NOT_PLUGGED_IN = 0xE0000007,\n    VIGEM_ERROR_BUS_VERSION_MISMATCH = 
0xE0000008,\n    VIGEM_ERROR_BUS_ACCESS_FAILED = 0xE0000009,\n    VIGEM_ERROR_CALLBACK_ALREADY_REGISTERED = 0xE0000010,\n    VIGEM_ERROR_CALLBACK_NOT_FOUND = 0xE0000011,\n    VIGEM_ERROR_BUS_ALREADY_CONNECTED = 0xE0000012\n} VIGEM_ERROR;\n\n/**\n * \\def VIGEM_SUCCESS(_val_) (_val_ == VIGEM_ERROR_NONE);\n *\n * \\brief   A macro that defines success.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   _val_   The VIGEM_ERROR value.\n */\n#define VIGEM_SUCCESS(_val_) (_val_ == VIGEM_ERROR_NONE)\n\n/**\n * \\typedef struct _VIGEM_CLIENT_T *PVIGEM_CLIENT\n *\n * \\brief   Defines an alias representing a driver connection object.\n */\ntypedef struct _VIGEM_CLIENT_T *PVIGEM_CLIENT;\n\n/**\n * \\typedef struct _VIGEM_TARGET_T *PVIGEM_TARGET\n *\n * \\brief   Defines an alias representing a target device object.\n */\ntypedef struct _VIGEM_TARGET_T *PVIGEM_TARGET;\n\ntypedef VOID(CALLBACK* PVIGEM_X360_NOTIFICATION)(\n    PVIGEM_CLIENT Client,\n    PVIGEM_TARGET Target,\n    UCHAR LargeMotor,\n    UCHAR SmallMotor,\n    UCHAR LedNumber);\n\ntypedef VOID(CALLBACK* PVIGEM_DS4_NOTIFICATION)(\n    PVIGEM_CLIENT Client,\n    PVIGEM_TARGET Target,\n    UCHAR LargeMotor,\n    UCHAR SmallMotor,\n    DS4_LIGHTBAR_COLOR LightbarColor);\n\ntypedef VOID(CALLBACK* PVIGEM_TARGET_ADD_RESULT)(\n    PVIGEM_CLIENT Client,\n    PVIGEM_TARGET Target,\n    VIGEM_ERROR Result);\n\n/**\n * \\fn  PVIGEM_CLIENT vigem_alloc(void);\n *\n * \\brief   Allocates an object representing a driver connection.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\return  A new driver connection object.\n */\nVIGEM_API PVIGEM_CLIENT vigem_alloc(void);\n\n/**\n * \\fn  void vigem_free(PVIGEM_CLIENT vigem);\n *\n * \\brief   Frees up memory used by the driver connection object.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n */\nVIGEM_API void 
vigem_free(PVIGEM_CLIENT vigem);\n\n/**\n * \\fn  VIGEM_ERROR vigem_connect(PVIGEM_CLIENT vigem);\n *\n * \\brief   Initializes the driver object and establishes a connection to the emulation bus\n *          driver. Returns an error if no compatible bus device has been found.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_connect(PVIGEM_CLIENT vigem);\n\n/**\n * \\fn  void vigem_disconnect(PVIGEM_CLIENT vigem);\n *\n * \\brief   Disconnects from the bus device and resets the driver object state. The driver object\n *          may be reused again after calling this function. When called, all targets which may\n *          still be connected will be destroyed automatically. Be aware that allocated target\n *          objects won't be freed automatically; this has to be taken care of by the caller.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n */\nVIGEM_API void vigem_disconnect(PVIGEM_CLIENT vigem);\n\n/**\n * \\fn  PVIGEM_TARGET vigem_target_x360_alloc(void);\n *\n * \\brief   Allocates an object representing an Xbox 360 Controller device.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\return  A PVIGEM_TARGET representing an Xbox 360 Controller device.\n */\nVIGEM_API PVIGEM_TARGET vigem_target_x360_alloc(void);\n\n/**\n * \\fn  PVIGEM_TARGET vigem_target_ds4_alloc(void);\n *\n * \\brief   Allocates an object representing a DualShock 4 Controller device.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\return  A PVIGEM_TARGET representing a DualShock 4 Controller device.\n */\nVIGEM_API PVIGEM_TARGET vigem_target_ds4_alloc(void);\n\n/**\n * \\fn  void vigem_target_free(PVIGEM_TARGET target);\n *\n * \\brief   Frees up memory used by the target device object. 
This does not automatically remove\n *          the associated device from the bus, if present. If the target device doesn't get\n *          removed before this call, the device becomes orphaned until the owning process is\n *          terminated.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n */\nVIGEM_API void vigem_target_free(PVIGEM_TARGET target);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_add(PVIGEM_CLIENT vigem, PVIGEM_TARGET target);\n *\n * \\brief   Adds a provided target device to the bus driver, which is equivalent to a device plug-in\n *          event of a physical hardware device. This function blocks until the target device is\n *          in full operational mode.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n * \\param   target  The target device object.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_add(PVIGEM_CLIENT vigem, PVIGEM_TARGET target);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_add_async(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, PVIGEM_TARGET_ADD_RESULT result);\n *\n * \\brief   Adds a provided target device to the bus driver, which is equivalent to a device plug-in\n *          event of a physical hardware device. This function returns immediately. 
An optional\n *          callback may be registered which gets called on error or if the target device has\n *          become fully operational.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n * \\param   target  The target device object.\n * \\param   result  An optional function getting called when the target device becomes available.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_add_async(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, PVIGEM_TARGET_ADD_RESULT result);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_remove(PVIGEM_CLIENT vigem, PVIGEM_TARGET target);\n *\n * \\brief   Removes a provided target device from the bus driver, which is equivalent to a device\n *          unplug event of a physical hardware device. The target device object may be reused\n *          after this function is called. If this function is never called on target device\n *          objects, they will be removed from the bus when the owning process terminates.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n * \\param   target  The target device object.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_remove(PVIGEM_CLIENT vigem, PVIGEM_TARGET target);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_x360_register_notification(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, PVIGEM_X360_NOTIFICATION notification);\n *\n * \\brief   Registers a function which gets called when LED index or vibration state changes\n *          occur on the provided target device. 
This function fails if the provided target\n *          device isn't fully operational or in an erroneous state.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem           The driver connection object.\n * \\param   target          The target device object.\n * \\param   notification    The notification callback.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_x360_register_notification(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, PVIGEM_X360_NOTIFICATION notification);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_ds4_register_notification(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, PVIGEM_DS4_NOTIFICATION notification);\n *\n * \\brief   Registers a function which gets called when LightBar or vibration state changes\n *          occur on the provided target device. This function fails if the provided target\n *          device isn't fully operational or in an erroneous state.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem           The driver connection object.\n * \\param   target          The target device object.\n * \\param   notification    The notification callback.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_ds4_register_notification(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, PVIGEM_DS4_NOTIFICATION notification);\n\n/**\n * \\fn  void vigem_target_x360_unregister_notification(PVIGEM_TARGET target);\n *\n * \\brief   Removes a previously registered callback function from the provided target object.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n */\nVIGEM_API void vigem_target_x360_unregister_notification(PVIGEM_TARGET target);\n\n/**\n * \\fn  void vigem_target_ds4_unregister_notification(PVIGEM_TARGET target);\n *\n * \\brief   Removes a previously registered callback function from the provided target object.\n *\n * \\author  
Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n */\nVIGEM_API void vigem_target_ds4_unregister_notification(PVIGEM_TARGET target);\n\n/**\n * \\fn  void vigem_target_set_vid(PVIGEM_TARGET target, USHORT vid);\n *\n * \\brief   Overrides the default Vendor ID value with the provided one.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n * \\param   vid     The Vendor ID to set.\n */\nVIGEM_API void vigem_target_set_vid(PVIGEM_TARGET target, USHORT vid);\n\n/**\n * \\fn  void vigem_target_set_pid(PVIGEM_TARGET target, USHORT pid);\n *\n * \\brief   Overrides the default Product ID value with the provided one.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n * \\param   pid     The Product ID to set.\n */\nVIGEM_API void vigem_target_set_pid(PVIGEM_TARGET target, USHORT pid);\n\n/**\n * \\fn  USHORT vigem_target_get_vid(PVIGEM_TARGET target);\n *\n * \\brief   Returns the Vendor ID of the provided target device object.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n *\n * \\return  The Vendor ID.\n */\nVIGEM_API USHORT vigem_target_get_vid(PVIGEM_TARGET target);\n\n/**\n * \\fn  USHORT vigem_target_get_pid(PVIGEM_TARGET target);\n *\n * \\brief   Returns the Product ID of the provided target device object.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n *\n * \\return  The Product ID.\n */\nVIGEM_API USHORT vigem_target_get_pid(PVIGEM_TARGET target);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_x360_update(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, XUSB_REPORT report);\n *\n * \\brief   Sends a state report to the provided target device.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    
28.08.2017\n *\n * \\param   vigem   The driver connection object.\n * \\param   target  The target device object.\n * \\param   report  The report to send to the target device.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_x360_update(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, XUSB_REPORT report);\n\n/**\n * \\fn  VIGEM_ERROR vigem_target_ds4_update(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, DS4_REPORT report);\n *\n * \\brief   Sends a state report to the provided target device.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   vigem   The driver connection object.\n * \\param   target  The target device object.\n * \\param   report  The report to send to the target device.\n *\n * \\return  A VIGEM_ERROR.\n */\nVIGEM_API VIGEM_ERROR vigem_target_ds4_update(PVIGEM_CLIENT vigem, PVIGEM_TARGET target, DS4_REPORT report);\n\n/**\n * \\fn  ULONG vigem_target_get_index(PVIGEM_TARGET target);\n *\n * \\brief   Returns the internal index (serial number) the bus driver assigned to the provided\n *          target device object. Note that this value is specific to the inner workings of the\n *          bus driver; it does not reflect related values like player index or device arrival\n *          order experienced by other APIs. It may be used to identify the target device object\n *          for its lifetime. 
This value becomes invalid once the target device is removed from\n *          the bus and may change on the next addition of the device.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n *\n * \\return  The internally used index of the target device.\n */\nVIGEM_API ULONG vigem_target_get_index(PVIGEM_TARGET target);\n\n/**\n * \\fn  VIGEM_TARGET_TYPE vigem_target_get_type(PVIGEM_TARGET target);\n *\n * \\brief   Returns the type of the provided target device object.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    28.08.2017\n *\n * \\param   target  The target device object.\n *\n * \\return  A VIGEM_TARGET_TYPE.\n */\nVIGEM_API VIGEM_TARGET_TYPE vigem_target_get_type(PVIGEM_TARGET target);\n\n/**\n * \\fn  BOOL vigem_target_is_attached(PVIGEM_TARGET target);\n *\n * \\brief   Returns TRUE if the provided target device object is currently attached to the bus,\n *          FALSE otherwise.\n *\n * \\author  Benjamin \"Nefarius\" Höglinger\n * \\date    30.08.2017\n *\n * \\param   target  The target device object.\n *\n * \\return  TRUE if device is attached to the bus, FALSE otherwise.\n */\nVIGEM_API BOOL vigem_target_is_attached(PVIGEM_TARGET target);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif // ViGEmClient_h__\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/ViGEmCommon.h",
    "content": "/*\nMIT License\n\nCopyright (c) 2017 Benjamin \"Nefarius\" Höglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n*/\n\n\n#pragma once\n\n//\n// Represents the desired target type for the emulated device.\n//  \ntypedef enum _VIGEM_TARGET_TYPE\n{\n    // \n    // Microsoft Xbox 360 Controller (wired)\n    // \n    Xbox360Wired,\n    // \n    // Microsoft Xbox One Controller (wired)\n    // \n    XboxOneWired,\n    //\n    // Sony DualShock 4 (wired)\n    // \n    DualShock4Wired\n\n} VIGEM_TARGET_TYPE, *PVIGEM_TARGET_TYPE;\n\n//\n// Possible XUSB report buttons.\n// \ntypedef enum _XUSB_BUTTON\n{\n    XUSB_GAMEPAD_DPAD_UP            = 0x0001,\n    XUSB_GAMEPAD_DPAD_DOWN          = 0x0002,\n    XUSB_GAMEPAD_DPAD_LEFT          = 0x0004,\n    XUSB_GAMEPAD_DPAD_RIGHT         = 0x0008,\n    XUSB_GAMEPAD_START              = 0x0010,\n    XUSB_GAMEPAD_BACK               = 0x0020,\n    XUSB_GAMEPAD_LEFT_THUMB         = 0x0040,\n    XUSB_GAMEPAD_RIGHT_THUMB        = 0x0080,\n 
   XUSB_GAMEPAD_LEFT_SHOULDER      = 0x0100,\n    XUSB_GAMEPAD_RIGHT_SHOULDER     = 0x0200,\n    XUSB_GAMEPAD_GUIDE              = 0x0400,\n    XUSB_GAMEPAD_A                  = 0x1000,\n    XUSB_GAMEPAD_B                  = 0x2000,\n    XUSB_GAMEPAD_X                  = 0x4000,\n    XUSB_GAMEPAD_Y                  = 0x8000\n\n} XUSB_BUTTON, *PXUSB_BUTTON;\n\n//\n// Represents an XINPUT_GAMEPAD-compatible report structure.\n// \ntypedef struct _XUSB_REPORT\n{\n    USHORT wButtons;\n    BYTE bLeftTrigger;\n    BYTE bRightTrigger;\n    SHORT sThumbLX;\n    SHORT sThumbLY;\n    SHORT sThumbRX;\n    SHORT sThumbRY;\n\n} XUSB_REPORT, *PXUSB_REPORT;\n\n//\n// Initializes a _XUSB_REPORT structure.\n// \nVOID FORCEINLINE XUSB_REPORT_INIT(\n    _Out_ PXUSB_REPORT Report\n)\n{\n    RtlZeroMemory(Report, sizeof(XUSB_REPORT));\n}\n\n//\n// The color value (RGB) of a DualShock 4 Lightbar\n// \ntypedef struct _DS4_LIGHTBAR_COLOR\n{\n    //\n    // Red part of the Lightbar (0-255).\n    //\n    UCHAR Red;\n\n    //\n    // Green part of the Lightbar (0-255).\n    //\n    UCHAR Green;\n\n    //\n    // Blue part of the Lightbar (0-255).\n    //\n    UCHAR Blue;\n\n} DS4_LIGHTBAR_COLOR, *PDS4_LIGHTBAR_COLOR;\n\n//\n// DualShock 4 digital buttons\n// \ntypedef enum _DS4_BUTTONS\n{\n    DS4_BUTTON_THUMB_RIGHT      = 1 << 15,\n    DS4_BUTTON_THUMB_LEFT       = 1 << 14,\n    DS4_BUTTON_OPTIONS          = 1 << 13,\n    DS4_BUTTON_SHARE            = 1 << 12,\n    DS4_BUTTON_TRIGGER_RIGHT    = 1 << 11,\n    DS4_BUTTON_TRIGGER_LEFT     = 1 << 10,\n    DS4_BUTTON_SHOULDER_RIGHT   = 1 << 9,\n    DS4_BUTTON_SHOULDER_LEFT    = 1 << 8,\n    DS4_BUTTON_TRIANGLE         = 1 << 7,\n    DS4_BUTTON_CIRCLE           = 1 << 6,\n    DS4_BUTTON_CROSS            = 1 << 5,\n    DS4_BUTTON_SQUARE           = 1 << 4\n\n} DS4_BUTTONS, *PDS4_BUTTONS;\n\n//\n// DualShock 4 special buttons\n// \ntypedef enum _DS4_SPECIAL_BUTTONS\n{\n    DS4_SPECIAL_BUTTON_PS           = 1 << 0,\n    DS4_SPECIAL_BUTTON_TOUCHPAD  
   = 1 << 1\n\n} DS4_SPECIAL_BUTTONS, *PDS4_SPECIAL_BUTTONS;\n\n//\n// DualShock 4 directional pad (HAT) values\n// \ntypedef enum _DS4_DPAD_DIRECTIONS\n{\n    DS4_BUTTON_DPAD_NONE        = 0x8,\n    DS4_BUTTON_DPAD_NORTHWEST   = 0x7,\n    DS4_BUTTON_DPAD_WEST        = 0x6,\n    DS4_BUTTON_DPAD_SOUTHWEST   = 0x5,\n    DS4_BUTTON_DPAD_SOUTH       = 0x4,\n    DS4_BUTTON_DPAD_SOUTHEAST   = 0x3,\n    DS4_BUTTON_DPAD_EAST        = 0x2,\n    DS4_BUTTON_DPAD_NORTHEAST   = 0x1,\n    DS4_BUTTON_DPAD_NORTH       = 0x0\n\n} DS4_DPAD_DIRECTIONS, *PDS4_DPAD_DIRECTIONS;\n\n//\n// DualShock 4 HID Input report\n// \ntypedef struct _DS4_REPORT\n{\n    BYTE bThumbLX;\n    BYTE bThumbLY;\n    BYTE bThumbRX;\n    BYTE bThumbRY;\n    USHORT wButtons;\n    BYTE bSpecial;\n    BYTE bTriggerL;\n    BYTE bTriggerR;\n\n} DS4_REPORT, *PDS4_REPORT;\n\n//\n// Sets the current state of the D-PAD on a DualShock 4 report.\n// \nVOID FORCEINLINE DS4_SET_DPAD(\n    _Out_ PDS4_REPORT Report,\n    _In_ DS4_DPAD_DIRECTIONS Dpad\n)\n{\n    Report->wButtons &= ~0xF;\n    Report->wButtons |= (USHORT)Dpad;\n}\n\nVOID FORCEINLINE DS4_REPORT_INIT(\n    _Out_ PDS4_REPORT Report\n)\n{\n    RtlZeroMemory(Report, sizeof(DS4_REPORT));\n\n    Report->bThumbLX = 0x80;\n    Report->bThumbLY = 0x80;\n    Report->bThumbRX = 0x80;\n    Report->bThumbRY = 0x80;\n\n    DS4_SET_DPAD(Report, DS4_BUTTON_DPAD_NONE);\n}\n\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/ViGEmUtil.h",
    "content": "#pragma once\n\n#include \"ViGEmCommon.h\"\n#include <limits.h>\n\nVOID FORCEINLINE XUSB_TO_DS4_REPORT(\n    _In_ PXUSB_REPORT Input,\n    _Inout_ PDS4_REPORT Output\n)\n{\n    if (Input->wButtons & XUSB_GAMEPAD_BACK) Output->wButtons |= DS4_BUTTON_SHARE;\n    if (Input->wButtons & XUSB_GAMEPAD_START) Output->wButtons |= DS4_BUTTON_OPTIONS;\n    if (Input->wButtons & XUSB_GAMEPAD_LEFT_THUMB) Output->wButtons |= DS4_BUTTON_THUMB_LEFT;\n    if (Input->wButtons & XUSB_GAMEPAD_RIGHT_THUMB) Output->wButtons |= DS4_BUTTON_THUMB_RIGHT;\n    if (Input->wButtons & XUSB_GAMEPAD_LEFT_SHOULDER) Output->wButtons |= DS4_BUTTON_SHOULDER_LEFT;\n    if (Input->wButtons & XUSB_GAMEPAD_RIGHT_SHOULDER) Output->wButtons |= DS4_BUTTON_SHOULDER_RIGHT;\n    if (Input->wButtons & XUSB_GAMEPAD_GUIDE) Output->bSpecial |= DS4_SPECIAL_BUTTON_PS;\n    if (Input->wButtons & XUSB_GAMEPAD_A) Output->wButtons |= DS4_BUTTON_CROSS;\n    if (Input->wButtons & XUSB_GAMEPAD_B) Output->wButtons |= DS4_BUTTON_CIRCLE;\n    if (Input->wButtons & XUSB_GAMEPAD_X) Output->wButtons |= DS4_BUTTON_SQUARE;\n    if (Input->wButtons & XUSB_GAMEPAD_Y) Output->wButtons |= DS4_BUTTON_TRIANGLE;\n\n    Output->bTriggerL = Input->bLeftTrigger;\n    Output->bTriggerR = Input->bRightTrigger;\n\n    if (Input->bLeftTrigger > 0) Output->wButtons |= DS4_BUTTON_TRIGGER_LEFT;\n    if (Input->bRightTrigger > 0) Output->wButtons |= DS4_BUTTON_TRIGGER_RIGHT;\n\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_UP) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_NORTH);\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_RIGHT) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_EAST);\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_DOWN) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_SOUTH);\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_LEFT) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_WEST);\n\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_UP\n        && Input->wButtons & XUSB_GAMEPAD_DPAD_RIGHT) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_NORTHEAST);\n    if (Input->wButtons & 
XUSB_GAMEPAD_DPAD_RIGHT\n        && Input->wButtons & XUSB_GAMEPAD_DPAD_DOWN) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_SOUTHEAST);\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_DOWN\n        && Input->wButtons & XUSB_GAMEPAD_DPAD_LEFT) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_SOUTHWEST);\n    if (Input->wButtons & XUSB_GAMEPAD_DPAD_LEFT\n        && Input->wButtons & XUSB_GAMEPAD_DPAD_UP) DS4_SET_DPAD(Output, DS4_BUTTON_DPAD_NORTHWEST);\n\n    Output->bThumbLX = ((Input->sThumbLX + ((USHRT_MAX / 2) + 1)) / 257);\n    Output->bThumbLY = (-(Input->sThumbLY + ((USHRT_MAX / 2) - 1)) / 257);\n    Output->bThumbLY = (Output->bThumbLY == 0) ? 0xFF : Output->bThumbLY;\n    Output->bThumbRX = ((Input->sThumbRX + ((USHRT_MAX / 2) + 1)) / 257);\n    Output->bThumbRY = (-(Input->sThumbRY + ((USHRT_MAX / 2) + 1)) / 257);\n    Output->bThumbRY = (Output->bThumbRY == 0) ? 0xFF : Output->bThumbRY;\n}\n\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/XInputOverrides.h",
    "content": "/*\nMIT License\n\nCopyright (c) 2016 Benjamin \"Nefarius\" Höglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n*/\n\n\n#pragma once\n\ntypedef enum _XINPUT_GAMEPAD_OVERRIDES\n{\n    XINPUT_GAMEPAD_OVERRIDE_DPAD_UP             = 0x00000001,\n    XINPUT_GAMEPAD_OVERRIDE_DPAD_DOWN           = 0x00000002,\n    XINPUT_GAMEPAD_OVERRIDE_DPAD_LEFT           = 0x00000004,\n    XINPUT_GAMEPAD_OVERRIDE_DPAD_RIGHT          = 0x00000008,\n    XINPUT_GAMEPAD_OVERRIDE_START               = 0x00000010,\n    XINPUT_GAMEPAD_OVERRIDE_BACK                = 0x00000020,\n    XINPUT_GAMEPAD_OVERRIDE_LEFT_THUMB          = 0x00000040,\n    XINPUT_GAMEPAD_OVERRIDE_RIGHT_THUMB         = 0x00000080,\n    XINPUT_GAMEPAD_OVERRIDE_LEFT_SHOULDER       = 0x00000100,\n    XINPUT_GAMEPAD_OVERRIDE_RIGHT_SHOULDER      = 0x00000200,\n    XINPUT_GAMEPAD_OVERRIDE_A                   = 0x00001000,\n    XINPUT_GAMEPAD_OVERRIDE_B                   = 0x00002000,\n    XINPUT_GAMEPAD_OVERRIDE_X                
   = 0x00004000,\n    XINPUT_GAMEPAD_OVERRIDE_Y                   = 0x00008000,\n    XINPUT_GAMEPAD_OVERRIDE_LEFT_TRIGGER        = 0x00010000,\n    XINPUT_GAMEPAD_OVERRIDE_RIGHT_TRIGGER       = 0x00020000,\n    XINPUT_GAMEPAD_OVERRIDE_LEFT_THUMB_X        = 0x00040000,\n    XINPUT_GAMEPAD_OVERRIDE_LEFT_THUMB_Y        = 0x00080000,\n    XINPUT_GAMEPAD_OVERRIDE_RIGHT_THUMB_X       = 0x00100000,\n    XINPUT_GAMEPAD_OVERRIDE_RIGHT_THUMB_Y       = 0x00200000\n} XINPUT_GAMEPAD_OVERRIDES, *PXINPUT_GAMEPAD_OVERRIDES;\n\n"
  },
  {
    "path": "ThirdParty/ViGEm/Include/XnaGuardianShared.h",
    "content": "/*\nMIT License\n\nCopyright (c) 2016 Benjamin \"Nefarius\" Hglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n*/\n\n\n#pragma once\n\n#define XNA_GUARDIAN_DEVICE_PATH    TEXT(\"\\\\\\\\.\\\\XnaGuardian\")\n\n#define XINPUT_MAX_DEVICES          0x04\n\n#define VALID_USER_INDEX(_index_)   ((_index_ >= 0) && (_index_ < XINPUT_MAX_DEVICES))\n\n//\n// Custom extensions\n// \n#define XINPUT_EXT_TYPE                         0x8001\n#define XINPUT_EXT_CODE                         0x801\n\n#define IOCTL_XINPUT_EXT_OVERRIDE_GAMEPAD_STATE CTL_CODE(XINPUT_EXT_TYPE, XINPUT_EXT_CODE + 0x01, METHOD_BUFFERED, FILE_WRITE_DATA)\n#define IOCTL_XINPUT_EXT_PEEK_GAMEPAD_STATE     CTL_CODE(XINPUT_EXT_TYPE, XINPUT_EXT_CODE + 0x02, METHOD_BUFFERED, FILE_READ_DATA | FILE_WRITE_DATA)\n\n\n//\n// State of the gamepad (compatible to XINPUT_GAMEPAD)\n// \ntypedef struct _XINPUT_GAMEPAD_STATE {\n    USHORT wButtons;\n    BYTE   bLeftTrigger;\n    BYTE   bRightTrigger;\n    SHORT  sThumbLX;\n   
 SHORT  sThumbLY;\n    SHORT  sThumbRX;\n    SHORT  sThumbRY;\n} XINPUT_GAMEPAD_STATE, *PXINPUT_GAMEPAD_STATE;\n\n//\n// Context data for IOCTL_XINPUT_EXT_OVERRIDE_GAMEPAD_STATE I/O control code\n// \ntypedef struct _XINPUT_EXT_OVERRIDE_GAMEPAD\n{\n    IN ULONG Size;\n\n    IN UCHAR UserIndex;\n\n    IN ULONG Overrides;\n\n    IN XINPUT_GAMEPAD_STATE Gamepad;\n\n} XINPUT_EXT_OVERRIDE_GAMEPAD, *PXINPUT_EXT_OVERRIDE_GAMEPAD;\n\nVOID FORCEINLINE XINPUT_EXT_OVERRIDE_GAMEPAD_INIT(\n    _Out_ PXINPUT_EXT_OVERRIDE_GAMEPAD OverrideGamepad,\n    _In_ UCHAR UserIndex\n)\n{\n    RtlZeroMemory(OverrideGamepad, sizeof(XINPUT_EXT_OVERRIDE_GAMEPAD));\n\n    OverrideGamepad->Size = sizeof(XINPUT_EXT_OVERRIDE_GAMEPAD);\n    OverrideGamepad->UserIndex = UserIndex;\n}\n\n//\n// Context data for IOCTL_XINPUT_EXT_PEEK_GAMEPAD_STATE I/O control code\n// \ntypedef struct _XINPUT_EXT_PEEK_GAMEPAD\n{\n    IN ULONG Size;\n\n    IN UCHAR UserIndex;\n\n} XINPUT_EXT_PEEK_GAMEPAD, *PXINPUT_EXT_PEEK_GAMEPAD;\n\nVOID FORCEINLINE XINPUT_EXT_PEEK_GAMEPAD_INIT(\n    _Out_ PXINPUT_EXT_PEEK_GAMEPAD PeekGamepad,\n    _In_ UCHAR UserIndex\n)\n{\n    RtlZeroMemory(PeekGamepad, sizeof(XINPUT_EXT_PEEK_GAMEPAD));\n\n    PeekGamepad->Size = sizeof(XINPUT_EXT_PEEK_GAMEPAD);\n    PeekGamepad->UserIndex = UserIndex;\n}\n\n\n"
  },
  {
    "path": "ThirdParty/ViGEm/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2016 Benjamin Höglinger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "ThirdParty/ffmpeg/LICENSE.txt",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  
And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. 
Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  
The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  
Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<https://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<https://www.gnu.org/licenses/why-not-lgpl.html>.\n"
  },
  {
    "path": "ThirdParty/ffmpeg/README.txt",
    "content": "Zeranoe FFmpeg Builds <http://ffmpeg.zeranoe.com/builds/>\n\nBuild: ffmpeg-4.0.2-win32-shared\n\nConfiguration:\n  --disable-static\n  --enable-shared\n  --enable-gpl\n  --enable-version3\n  --enable-sdl2\n  --enable-bzlib\n  --enable-fontconfig\n  --enable-gnutls\n  --enable-iconv\n  --enable-libass\n  --enable-libbluray\n  --enable-libfreetype\n  --enable-libmp3lame\n  --enable-libopencore-amrnb\n  --enable-libopencore-amrwb\n  --enable-libopenjpeg\n  --enable-libopus\n  --enable-libshine\n  --enable-libsnappy\n  --enable-libsoxr\n  --enable-libtheora\n  --enable-libtwolame\n  --enable-libvpx\n  --enable-libwavpack\n  --enable-libwebp\n  --enable-libx264\n  --enable-libx265\n  --enable-libxml2\n  --enable-libzimg\n  --enable-lzma\n  --enable-zlib\n  --enable-gmp\n  --enable-libvidstab\n  --enable-libvorbis\n  --enable-libvo-amrwbenc\n  --enable-libmysofa\n  --enable-libspeex\n  --enable-libxvid\n  --enable-libaom\n  --enable-libmfx\n  --enable-amf\n  --enable-ffnvcodec\n  --enable-cuvid\n  --enable-d3d11va\n  --enable-nvenc\n  --enable-nvdec\n  --enable-dxva2\n  --enable-avisynth\n\nLibraries:\n  SDL               2.0.8             <https://libsdl.org>\n  bzip2             1.0.6             <http://bzip.org/>\n  Fontconfig        2.13.0            <http://freedesktop.org/wiki/Software/fontconfig>\n  GnuTLS            3.5.19            <https://gnutls.org/>\n  libiconv          1.15              <http://gnu.org/software/libiconv>\n  libass            0.14.0            <https://github.com/libass/libass>\n  libbluray         20180309-8c15fda  <https://www.videolan.org/developers/libbluray.html>\n  FreeType          2.9.1             <http://freetype.sourceforge.net>\n  LAME              3.100             <http://lame.sourceforge.net>\n  OpenCORE AMR      20170731-07a5be4  <https://sourceforge.net/projects/opencore-amr>\n  OpenJPEG          2.3.0             <https://github.com/uclouvain/openjpeg>\n  Opus              20180614-c1c247d  
<https://opus-codec.org>\n  shine             3.1.1             <https://github.com/savonet/shine>\n  Snappy            1.1.7             <https://github.com/google/snappy>\n  libsoxr           20180224-945b592  <http://sourceforge.net/projects/soxr>\n  Theora            1.1.1             <http://theora.org>\n  TwoLAME           0.3.13            <http://twolame.org>\n  vpx               1.7.0             <http://webmproject.org>\n  WavPack           5.1.0             <http://wavpack.com>\n  WebP              1.0.0             <https://developers.google.com/speed/webp>\n  x264              20180118-7d0ff22  <https://www.videolan.org/developers/x264.html>\n  x265              20180725-79c76e4  <https://bitbucket.org/multicoreware/x265/wiki/Home>\n  libxml2           2.9.8             <http://xmlsoft.org>\n  z.lib             20180713-654c15b  <https://github.com/sekrit-twc/zimg>\n  XZ Utils          5.2.4             <http://tukaani.org/xz>\n  zlib              1.2.11            <http://zlib.net>\n  GMP               6.1.2             <https://gmplib.org>\n  vid.stab          20180529-38ecbaf  <http://public.hronopik.de/vid.stab>\n  Vorbis            1.3.5             <http://vorbis.com>\n  VisualOn AMR-WB   20141107-3b3fcd0  <https://sourceforge.net/projects/opencore-amr>\n  libmysofa         20171120-cec6eea  <https://github.com/hoene/libmysofa>\n  Speex             1.2.0             <http://speex.org>\n  Xvid              1.3.5             <https://labs.xvid.com>\n  aom               20180725-b0c13b2  <https://aomedia.googlesource.com/aom>\n  libmfx            1.23              <https://software.intel.com/en-us/media-sdk>\n  AMF               20171219-801247d  <https://gpuopen.com/gaming-product/advanced-media-framework>\n  nv-codec-headers  20180507-91d9e20  <https://git.videolan.org/?p=ffmpeg/nv-codec-headers.git>\n\nCopyright (C) 2018 Kyle Schwarz\n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General 
Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program.  If not, see <http://www.gnu.org/licenses/>.\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/Makefile",
    "content": "# use pkg-config for getting CFLAGS and LDLIBS\nFFMPEG_LIBS=    libavdevice                        \\\n                libavformat                        \\\n                libavfilter                        \\\n                libavcodec                         \\\n                libswresample                      \\\n                libswscale                         \\\n                libavutil                          \\\n\nCFLAGS += -Wall -g\nCFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)\nLDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)\n\nEXAMPLES=       avio_dir_cmd                       \\\n                avio_reading                       \\\n                decode_audio                       \\\n                decode_video                       \\\n                demuxing_decoding                  \\\n                encode_audio                       \\\n                encode_video                       \\\n                extract_mvs                        \\\n                filtering_video                    \\\n                filtering_audio                    \\\n                http_multiclient                   \\\n                hw_decode                          \\\n                metadata                           \\\n                muxing                             \\\n                remuxing                           \\\n                resampling_audio                   \\\n                scaling_video                      \\\n                transcode_aac                      \\\n                transcoding                        \\\n\nOBJS=$(addsuffix .o,$(EXAMPLES))\n\n# the following examples make explicit use of the math library\navcodec:           LDLIBS += -lm\nencode_audio:      LDLIBS += -lm\nmuxing:            LDLIBS += -lm\nresampling_audio:  LDLIBS += -lm\n\n.phony: all clean-test clean\n\nall: $(OBJS) $(EXAMPLES)\n\nclean-test:\n\t$(RM) test*.pgm test.h264 test.mp2 
test.sw test.mpg\n\nclean: clean-test\n\t$(RM) $(EXAMPLES) $(OBJS)\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/README",
    "content": "FFmpeg examples README\n----------------------\n\nBoth following use cases rely on pkg-config and make, thus make sure\nthat you have them installed and working on your system.\n\n\nMethod 1: build the installed examples in a generic read/write user directory\n\nCopy to a read/write user directory and just use \"make\", it will link\nto the libraries on your system, assuming the PKG_CONFIG_PATH is\ncorrectly configured.\n\nMethod 2: build the examples in-tree\n\nAssuming you are in the source FFmpeg checkout directory, you need to build\nFFmpeg (no need to make install in any prefix). Then just run \"make examples\".\nThis will build the examples using the FFmpeg build system. You can clean those\nexamples using \"make examplesclean\"\n\nIf you want to try the dedicated Makefile examples (to emulate the first\nmethod), go into doc/examples and run a command such as\nPKG_CONFIG_PATH=pc-uninstalled make.\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/avio_dir_cmd.c",
    "content": "/*\n * Copyright (c) 2014 Lukasz Marek\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n#include <libavformat/avio.h>\n\nstatic const char *type_string(int type)\n{\n    switch (type) {\n    case AVIO_ENTRY_DIRECTORY:\n        return \"<DIR>\";\n    case AVIO_ENTRY_FILE:\n        return \"<FILE>\";\n    case AVIO_ENTRY_BLOCK_DEVICE:\n        return \"<BLOCK DEVICE>\";\n    case AVIO_ENTRY_CHARACTER_DEVICE:\n        return \"<CHARACTER DEVICE>\";\n    case AVIO_ENTRY_NAMED_PIPE:\n        return \"<PIPE>\";\n    case AVIO_ENTRY_SYMBOLIC_LINK:\n        return \"<LINK>\";\n    case AVIO_ENTRY_SOCKET:\n        return \"<SOCKET>\";\n    case AVIO_ENTRY_SERVER:\n        return \"<SERVER>\";\n    case AVIO_ENTRY_SHARE:\n        return \"<SHARE>\";\n    case AVIO_ENTRY_WORKGROUP:\n        return \"<WORKGROUP>\";\n    case AVIO_ENTRY_UNKNOWN:\n    
default:\n        break;\n    }\n    return \"<UNKNOWN>\";\n}\n\nstatic int list_op(const char *input_dir)\n{\n    AVIODirEntry *entry = NULL;\n    AVIODirContext *ctx = NULL;\n    int cnt, ret;\n    char filemode[4], uid_and_gid[20];\n\n    if ((ret = avio_open_dir(&ctx, input_dir, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot open directory: %s.\\n\", av_err2str(ret));\n        goto fail;\n    }\n\n    cnt = 0;\n    for (;;) {\n        if ((ret = avio_read_dir(ctx, &entry)) < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot list directory: %s.\\n\", av_err2str(ret));\n            goto fail;\n        }\n        if (!entry)\n            break;\n        if (entry->filemode == -1) {\n            snprintf(filemode, 4, \"???\");\n        } else {\n            snprintf(filemode, 4, \"%3\"PRIo64, entry->filemode);\n        }\n        snprintf(uid_and_gid, 20, \"%\"PRId64\"(%\"PRId64\")\", entry->user_id, entry->group_id);\n        if (cnt == 0)\n            av_log(NULL, AV_LOG_INFO, \"%-9s %12s %30s %10s %s %16s %16s %16s\\n\",\n                   \"TYPE\", \"SIZE\", \"NAME\", \"UID(GID)\", \"UGO\", \"MODIFIED\",\n                   \"ACCESSED\", \"STATUS_CHANGED\");\n        av_log(NULL, AV_LOG_INFO, \"%-9s %12\"PRId64\" %30s %10s %s %16\"PRId64\" %16\"PRId64\" %16\"PRId64\"\\n\",\n               type_string(entry->type),\n               entry->size,\n               entry->name,\n               uid_and_gid,\n               filemode,\n               entry->modification_timestamp,\n               entry->access_timestamp,\n               entry->status_change_timestamp);\n        avio_free_directory_entry(&entry);\n        cnt++;\n    };\n\n  fail:\n    avio_close_dir(&ctx);\n    return ret;\n}\n\nstatic int del_op(const char *url)\n{\n    int ret = avpriv_io_delete(url);\n    if (ret < 0)\n        av_log(NULL, AV_LOG_ERROR, \"Cannot delete '%s': %s.\\n\", url, av_err2str(ret));\n    return ret;\n}\n\nstatic int move_op(const char *src, const char 
*dst)\n{\n    int ret = avpriv_io_move(src, dst);\n    if (ret < 0)\n        av_log(NULL, AV_LOG_ERROR, \"Cannot move '%s' into '%s': %s.\\n\", src, dst, av_err2str(ret));\n    return ret;\n}\n\n\nstatic void usage(const char *program_name)\n{\n    fprintf(stderr, \"usage: %s OPERATION entry1 [entry2]\\n\"\n            \"API example program to show how to manipulate resources \"\n            \"accessed through AVIOContext.\\n\"\n            \"OPERATIONS:\\n\"\n            \"list      list content of the directory\\n\"\n            \"move      rename content in directory\\n\"\n            \"del       delete content in directory\\n\",\n            program_name);\n}\n\nint main(int argc, char *argv[])\n{\n    const char *op = NULL;\n    int ret;\n\n    av_log_set_level(AV_LOG_DEBUG);\n\n    if (argc < 2) {\n        usage(argv[0]);\n        return 1;\n    }\n\n    avformat_network_init();\n\n    op = argv[1];\n    if (strcmp(op, \"list\") == 0) {\n        if (argc < 3) {\n            av_log(NULL, AV_LOG_INFO, \"Missing argument for list operation.\\n\");\n            ret = AVERROR(EINVAL);\n        } else {\n            ret = list_op(argv[2]);\n        }\n    } else if (strcmp(op, \"del\") == 0) {\n        if (argc < 3) {\n            av_log(NULL, AV_LOG_INFO, \"Missing argument for del operation.\\n\");\n            ret = AVERROR(EINVAL);\n        } else {\n            ret = del_op(argv[2]);\n        }\n    } else if (strcmp(op, \"move\") == 0) {\n        if (argc < 4) {\n            av_log(NULL, AV_LOG_INFO, \"Missing argument for move operation.\\n\");\n            ret = AVERROR(EINVAL);\n        } else {\n            ret = move_op(argv[2], argv[3]);\n        }\n    } else {\n        av_log(NULL, AV_LOG_INFO, \"Invalid operation %s\\n\", op);\n        ret = AVERROR(EINVAL);\n    }\n\n    avformat_network_deinit();\n\n    return ret < 0 ? 1 : 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/avio_reading.c",
    "content": "/*\n * Copyright (c) 2014 Stefano Sabatini\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * libavformat AVIOContext API example.\n *\n * Make libavformat demuxer access media content through a custom\n * AVIOContext read callback.\n * @example avio_reading.c\n */\n\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n#include <libavformat/avio.h>\n#include <libavutil/file.h>\n\nstruct buffer_data {\n    uint8_t *ptr;\n    size_t size; ///< size left in the buffer\n};\n\nstatic int read_packet(void *opaque, uint8_t *buf, int buf_size)\n{\n    struct buffer_data *bd = (struct buffer_data *)opaque;\n    buf_size = FFMIN(buf_size, bd->size);\n\n    if (!buf_size)\n        return AVERROR_EOF;\n    printf(\"ptr:%p size:%zu\\n\", bd->ptr, bd->size);\n\n    /* copy internal buffer data to buf */\n    memcpy(buf, bd->ptr, buf_size);\n    bd->ptr  += buf_size;\n    bd->size -= buf_size;\n\n 
   return buf_size;\n}\n\nint main(int argc, char *argv[])\n{\n    AVFormatContext *fmt_ctx = NULL;\n    AVIOContext *avio_ctx = NULL;\n    uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;\n    size_t buffer_size, avio_ctx_buffer_size = 4096;\n    char *input_filename = NULL;\n    int ret = 0;\n    struct buffer_data bd = { 0 };\n\n    if (argc != 2) {\n        fprintf(stderr, \"usage: %s input_file\\n\"\n                \"API example program to show how to read from a custom buffer \"\n                \"accessed through AVIOContext.\\n\", argv[0]);\n        return 1;\n    }\n    input_filename = argv[1];\n\n    /* slurp file content into buffer */\n    ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);\n    if (ret < 0)\n        goto end;\n\n    /* fill opaque structure used by the AVIOContext read callback */\n    bd.ptr  = buffer;\n    bd.size = buffer_size;\n\n    if (!(fmt_ctx = avformat_alloc_context())) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);\n    if (!avio_ctx_buffer) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n    avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,\n                                  0, &bd, &read_packet, NULL, NULL);\n    if (!avio_ctx) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n    fmt_ctx->pb = avio_ctx;\n\n    ret = avformat_open_input(&fmt_ctx, NULL, NULL, NULL);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not open input\\n\");\n        goto end;\n    }\n\n    ret = avformat_find_stream_info(fmt_ctx, NULL);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not find stream information\\n\");\n        goto end;\n    }\n\n    av_dump_format(fmt_ctx, 0, input_filename, 0);\n\nend:\n    avformat_close_input(&fmt_ctx);\n    /* note: the internal buffer could have changed, and be != avio_ctx_buffer */\n    if (avio_ctx) {\n        av_freep(&avio_ctx->buffer);\n        
av_freep(&avio_ctx);\n    }\n    av_file_unmap(buffer, buffer_size);\n\n    if (ret < 0) {\n        fprintf(stderr, \"Error occurred: %s\\n\", av_err2str(ret));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/decode_audio.c",
    "content": "/*\n * Copyright (c) 2001 Fabrice Bellard\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * audio decoding with libavcodec API example\n *\n * @example decode_audio.c\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include <libavutil/frame.h>\n#include <libavutil/mem.h>\n\n#include <libavcodec/avcodec.h>\n\n#define AUDIO_INBUF_SIZE 20480\n#define AUDIO_REFILL_THRESH 4096\n\nstatic void decode(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame,\n                   FILE *outfile)\n{\n    int i, ch;\n    int ret, data_size;\n\n    /* send the packet with the compressed data to the decoder */\n    ret = avcodec_send_packet(dec_ctx, pkt);\n    if (ret < 0) {\n        fprintf(stderr, \"Error submitting the packet to the decoder\\n\");\n        exit(1);\n    }\n\n    /* read all the output frames (in general there may be any number of them */\n    while (ret >= 0) {\n        
ret = avcodec_receive_frame(dec_ctx, frame);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n            return;\n        else if (ret < 0) {\n            fprintf(stderr, \"Error during decoding\\n\");\n            exit(1);\n        }\n        data_size = av_get_bytes_per_sample(dec_ctx->sample_fmt);\n        if (data_size < 0) {\n            /* This should not occur, checking just for paranoia */\n            fprintf(stderr, \"Failed to calculate data size\\n\");\n            exit(1);\n        }\n        for (i = 0; i < frame->nb_samples; i++)\n            for (ch = 0; ch < dec_ctx->channels; ch++)\n                fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile);\n    }\n}\n\nint main(int argc, char **argv)\n{\n    const char *outfilename, *filename;\n    const AVCodec *codec;\n    AVCodecContext *c= NULL;\n    AVCodecParserContext *parser = NULL;\n    int len, ret;\n    FILE *f, *outfile;\n    uint8_t inbuf[AUDIO_INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];\n    uint8_t *data;\n    size_t   data_size;\n    AVPacket *pkt;\n    AVFrame *decoded_frame = NULL;\n\n    if (argc <= 2) {\n        fprintf(stderr, \"Usage: %s <input file> <output file>\\n\", argv[0]);\n        exit(0);\n    }\n    filename    = argv[1];\n    outfilename = argv[2];\n\n    pkt = av_packet_alloc();\n\n    /* find the MPEG audio decoder */\n    codec = avcodec_find_decoder(AV_CODEC_ID_MP2);\n    if (!codec) {\n        fprintf(stderr, \"Codec not found\\n\");\n        exit(1);\n    }\n\n    parser = av_parser_init(codec->id);\n    if (!parser) {\n        fprintf(stderr, \"Parser not found\\n\");\n        exit(1);\n    }\n\n    c = avcodec_alloc_context3(codec);\n    if (!c) {\n        fprintf(stderr, \"Could not allocate audio codec context\\n\");\n        exit(1);\n    }\n\n    /* open it */\n    if (avcodec_open2(c, codec, NULL) < 0) {\n        fprintf(stderr, \"Could not open codec\\n\");\n        exit(1);\n    }\n\n    f = fopen(filename, \"rb\");\n    if (!f) {\n    
    fprintf(stderr, \"Could not open %s\\n\", filename);\n        exit(1);\n    }\n    outfile = fopen(outfilename, \"wb\");\n    if (!outfile) {\n        av_free(c);\n        exit(1);\n    }\n\n    /* decode until eof */\n    data      = inbuf;\n    data_size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);\n\n    while (data_size > 0) {\n        if (!decoded_frame) {\n            if (!(decoded_frame = av_frame_alloc())) {\n                fprintf(stderr, \"Could not allocate audio frame\\n\");\n                exit(1);\n            }\n        }\n\n        ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,\n                               data, data_size,\n                               AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);\n        if (ret < 0) {\n            fprintf(stderr, \"Error while parsing\\n\");\n            exit(1);\n        }\n        data      += ret;\n        data_size -= ret;\n\n        if (pkt->size)\n            decode(c, pkt, decoded_frame, outfile);\n\n        if (data_size < AUDIO_REFILL_THRESH) {\n            memmove(inbuf, data, data_size);\n            data = inbuf;\n            len = fread(data + data_size, 1,\n                        AUDIO_INBUF_SIZE - data_size, f);\n            if (len > 0)\n                data_size += len;\n        }\n    }\n\n    /* flush the decoder */\n    pkt->data = NULL;\n    pkt->size = 0;\n    decode(c, pkt, decoded_frame, outfile);\n\n    fclose(outfile);\n    fclose(f);\n\n    avcodec_free_context(&c);\n    av_parser_close(parser);\n    av_frame_free(&decoded_frame);\n    av_packet_free(&pkt);\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/decode_video.c",
    "content": "/*\n * Copyright (c) 2001 Fabrice Bellard\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * video decoding with libavcodec API example\n *\n * @example decode_video.c\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include <libavcodec/avcodec.h>\n\n#define INBUF_SIZE 4096\n\nstatic void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,\n                     char *filename)\n{\n    FILE *f;\n    int i;\n\n    f = fopen(filename,\"w\");\n    fprintf(f, \"P5\\n%d %d\\n%d\\n\", xsize, ysize, 255);\n    for (i = 0; i < ysize; i++)\n        fwrite(buf + i * wrap, 1, xsize, f);\n    fclose(f);\n}\n\nstatic void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt,\n                   const char *filename)\n{\n    char buf[1024];\n    int ret;\n\n    ret = avcodec_send_packet(dec_ctx, pkt);\n    if (ret < 0) {\n        fprintf(stderr, \"Error sending a packet 
for decoding\\n\");\n        exit(1);\n    }\n\n    while (ret >= 0) {\n        ret = avcodec_receive_frame(dec_ctx, frame);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n            return;\n        else if (ret < 0) {\n            fprintf(stderr, \"Error during decoding\\n\");\n            exit(1);\n        }\n\n        printf(\"saving frame %3d\\n\", dec_ctx->frame_number);\n        fflush(stdout);\n\n        /* the picture is allocated by the decoder. no need to\n           free it */\n        snprintf(buf, sizeof(buf), \"%s-%d\", filename, dec_ctx->frame_number);\n        pgm_save(frame->data[0], frame->linesize[0],\n                 frame->width, frame->height, buf);\n    }\n}\n\nint main(int argc, char **argv)\n{\n    const char *filename, *outfilename;\n    const AVCodec *codec;\n    AVCodecParserContext *parser;\n    AVCodecContext *c= NULL;\n    FILE *f;\n    AVFrame *frame;\n    uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];\n    uint8_t *data;\n    size_t   data_size;\n    int ret;\n    AVPacket *pkt;\n\n    if (argc <= 2) {\n        fprintf(stderr, \"Usage: %s <input file> <output file>\\n\", argv[0]);\n        exit(0);\n    }\n    filename    = argv[1];\n    outfilename = argv[2];\n\n    pkt = av_packet_alloc();\n    if (!pkt)\n        exit(1);\n\n    /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */\n    memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);\n\n    /* find the MPEG-1 video decoder */\n    codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);\n    if (!codec) {\n        fprintf(stderr, \"Codec not found\\n\");\n        exit(1);\n    }\n\n    parser = av_parser_init(codec->id);\n    if (!parser) {\n        fprintf(stderr, \"parser not found\\n\");\n        exit(1);\n    }\n\n    c = avcodec_alloc_context3(codec);\n    if (!c) {\n        fprintf(stderr, \"Could not allocate video codec context\\n\");\n        exit(1);\n    }\n\n    /* For some codecs, such as 
msmpeg4 and mpeg4, width and height\n       MUST be initialized there because this information is not\n       available in the bitstream. */\n\n    /* open it */\n    if (avcodec_open2(c, codec, NULL) < 0) {\n        fprintf(stderr, \"Could not open codec\\n\");\n        exit(1);\n    }\n\n    f = fopen(filename, \"rb\");\n    if (!f) {\n        fprintf(stderr, \"Could not open %s\\n\", filename);\n        exit(1);\n    }\n\n    frame = av_frame_alloc();\n    if (!frame) {\n        fprintf(stderr, \"Could not allocate video frame\\n\");\n        exit(1);\n    }\n\n    while (!feof(f)) {\n        /* read raw data from the input file */\n        data_size = fread(inbuf, 1, INBUF_SIZE, f);\n        if (!data_size)\n            break;\n\n        /* use the parser to split the data into frames */\n        data = inbuf;\n        while (data_size > 0) {\n            ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,\n                                   data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);\n            if (ret < 0) {\n                fprintf(stderr, \"Error while parsing\\n\");\n                exit(1);\n            }\n            data      += ret;\n            data_size -= ret;\n\n            if (pkt->size)\n                decode(c, frame, pkt, outfilename);\n        }\n    }\n\n    /* flush the decoder */\n    decode(c, frame, NULL, outfilename);\n\n    fclose(f);\n\n    av_parser_close(parser);\n    avcodec_free_context(&c);\n    av_frame_free(&frame);\n    av_packet_free(&pkt);\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/demuxing_decoding.c",
    "content": "/*\n * Copyright (c) 2012 Stefano Sabatini\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * Demuxing and decoding example.\n *\n * Show how to use the libavformat and libavcodec API to demux and\n * decode audio and video data.\n * @example demuxing_decoding.c\n */\n\n#include <libavutil/imgutils.h>\n#include <libavutil/samplefmt.h>\n#include <libavutil/timestamp.h>\n#include <libavformat/avformat.h>\n\nstatic AVFormatContext *fmt_ctx = NULL;\nstatic AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;\nstatic int width, height;\nstatic enum AVPixelFormat pix_fmt;\nstatic AVStream *video_stream = NULL, *audio_stream = NULL;\nstatic const char *src_filename = NULL;\nstatic const char *video_dst_filename = NULL;\nstatic const char *audio_dst_filename = NULL;\nstatic FILE *video_dst_file = NULL;\nstatic FILE *audio_dst_file = NULL;\n\nstatic uint8_t *video_dst_data[4] = {NULL};\nstatic int      
video_dst_linesize[4];\nstatic int video_dst_bufsize;\n\nstatic int video_stream_idx = -1, audio_stream_idx = -1;\nstatic AVFrame *frame = NULL;\nstatic AVPacket pkt;\nstatic int video_frame_count = 0;\nstatic int audio_frame_count = 0;\n\n/* Enable or disable frame reference counting. You are not supposed to support\n * both paths in your application but pick the one most appropriate to your\n * needs. Look for the use of refcount in this example to see what are the\n * differences of API usage between them. */\nstatic int refcount = 0;\n\nstatic int decode_packet(int *got_frame, int cached)\n{\n    int ret = 0;\n    int decoded = pkt.size;\n\n    *got_frame = 0;\n\n    if (pkt.stream_index == video_stream_idx) {\n        /* decode video frame */\n        ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);\n        if (ret < 0) {\n            fprintf(stderr, \"Error decoding video frame (%s)\\n\", av_err2str(ret));\n            return ret;\n        }\n\n        if (*got_frame) {\n\n            if (frame->width != width || frame->height != height ||\n                frame->format != pix_fmt) {\n                /* To handle this change, one could call av_image_alloc again and\n                 * decode the following frames into another rawvideo file. */\n                fprintf(stderr, \"Error: Width, height and pixel format have to be \"\n                        \"constant in a rawvideo file, but the width, height or \"\n                        \"pixel format of the input video changed:\\n\"\n                        \"old: width = %d, height = %d, format = %s\\n\"\n                        \"new: width = %d, height = %d, format = %s\\n\",\n                        width, height, av_get_pix_fmt_name(pix_fmt),\n                        frame->width, frame->height,\n                        av_get_pix_fmt_name(frame->format));\n                return -1;\n            }\n\n            printf(\"video_frame%s n:%d coded_n:%d\\n\",\n                   cached ? 
\"(cached)\" : \"\",\n                   video_frame_count++, frame->coded_picture_number);\n\n            /* copy decoded frame to destination buffer:\n             * this is required since rawvideo expects non aligned data */\n            av_image_copy(video_dst_data, video_dst_linesize,\n                          (const uint8_t **)(frame->data), frame->linesize,\n                          pix_fmt, width, height);\n\n            /* write to rawvideo file */\n            fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);\n        }\n    } else if (pkt.stream_index == audio_stream_idx) {\n        /* decode audio frame */\n        ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);\n        if (ret < 0) {\n            fprintf(stderr, \"Error decoding audio frame (%s)\\n\", av_err2str(ret));\n            return ret;\n        }\n        /* Some audio decoders decode only part of the packet, and have to be\n         * called again with the remainder of the packet data.\n         * Sample: fate-suite/lossless-audio/luckynight-partial.shn\n         * Also, some decoders might over-read the packet. */\n        decoded = FFMIN(ret, pkt.size);\n\n        if (*got_frame) {\n            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);\n            printf(\"audio_frame%s n:%d nb_samples:%d pts:%s\\n\",\n                   cached ? \"(cached)\" : \"\",\n                   audio_frame_count++, frame->nb_samples,\n                   av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));\n\n            /* Write the raw audio data samples of the first plane. This works\n             * fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,\n             * most audio decoders output planar audio, which uses a separate\n             * plane of audio samples for each channel (e.g. 
AV_SAMPLE_FMT_S16P).\n             * In other words, this code will write only the first audio channel\n             * in these cases.\n             * You should use libswresample or libavfilter to convert the frame\n             * to packed data. */\n            fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);\n        }\n    }\n\n    /* If we use frame reference counting, we own the data and need\n     * to de-reference it when we don't use it anymore */\n    if (*got_frame && refcount)\n        av_frame_unref(frame);\n\n    return decoded;\n}\n\nstatic int open_codec_context(int *stream_idx,\n                              AVCodecContext **dec_ctx, AVFormatContext *fmt_ctx, enum AVMediaType type)\n{\n    int ret, stream_index;\n    AVStream *st;\n    AVCodec *dec = NULL;\n    AVDictionary *opts = NULL;\n\n    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not find %s stream in input file '%s'\\n\",\n                av_get_media_type_string(type), src_filename);\n        return ret;\n    } else {\n        stream_index = ret;\n        st = fmt_ctx->streams[stream_index];\n\n        /* find decoder for the stream */\n        dec = avcodec_find_decoder(st->codecpar->codec_id);\n        if (!dec) {\n            fprintf(stderr, \"Failed to find %s codec\\n\",\n                    av_get_media_type_string(type));\n            return AVERROR(EINVAL);\n        }\n\n        /* Allocate a codec context for the decoder */\n        *dec_ctx = avcodec_alloc_context3(dec);\n        if (!*dec_ctx) {\n            fprintf(stderr, \"Failed to allocate the %s codec context\\n\",\n                    av_get_media_type_string(type));\n            return AVERROR(ENOMEM);\n        }\n\n        /* Copy codec parameters from input stream to output codec context */\n        if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {\n            fprintf(stderr, \"Failed to copy %s codec 
parameters to decoder context\\n\",\n                    av_get_media_type_string(type));\n            return ret;\n        }\n\n        /* Init the decoders, with or without reference counting */\n        av_dict_set(&opts, \"refcounted_frames\", refcount ? \"1\" : \"0\", 0);\n        if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0) {\n            fprintf(stderr, \"Failed to open %s codec\\n\",\n                    av_get_media_type_string(type));\n            return ret;\n        }\n        *stream_idx = stream_index;\n    }\n\n    return 0;\n}\n\nstatic int get_format_from_sample_fmt(const char **fmt,\n                                      enum AVSampleFormat sample_fmt)\n{\n    int i;\n    struct sample_fmt_entry {\n        enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;\n    } sample_fmt_entries[] = {\n        { AV_SAMPLE_FMT_U8,  \"u8\",    \"u8\"    },\n        { AV_SAMPLE_FMT_S16, \"s16be\", \"s16le\" },\n        { AV_SAMPLE_FMT_S32, \"s32be\", \"s32le\" },\n        { AV_SAMPLE_FMT_FLT, \"f32be\", \"f32le\" },\n        { AV_SAMPLE_FMT_DBL, \"f64be\", \"f64le\" },\n    };\n    *fmt = NULL;\n\n    for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {\n        struct sample_fmt_entry *entry = &sample_fmt_entries[i];\n        if (sample_fmt == entry->sample_fmt) {\n            *fmt = AV_NE(entry->fmt_be, entry->fmt_le);\n            return 0;\n        }\n    }\n\n    fprintf(stderr,\n            \"sample format %s is not supported as output format\\n\",\n            av_get_sample_fmt_name(sample_fmt));\n    return -1;\n}\n\nint main (int argc, char **argv)\n{\n    int ret = 0, got_frame;\n\n    if (argc != 4 && argc != 5) {\n        fprintf(stderr, \"usage: %s [-refcount] input_file video_output_file audio_output_file\\n\"\n                \"API example program to show how to read frames from an input file.\\n\"\n                \"This program reads frames from a file, decodes them, and writes decoded\\n\"\n                \"video frames 
to a rawvideo file named video_output_file, and decoded\n\"\n                \"audio frames to a rawaudio file named audio_output_file.\n\n\"\n                \"If the -refcount option is specified, the program uses the\n\"\n                \"reference counting frame system which allows keeping a copy of\n\"\n                \"the data for longer than one decode call.\n\"\n                \"\n\", argv[0]);\n        exit(1);\n    }\n    if (argc == 5 && !strcmp(argv[1], \"-refcount\")) {\n        refcount = 1;\n        argv++;\n    }\n    src_filename = argv[1];\n    video_dst_filename = argv[2];\n    audio_dst_filename = argv[3];\n\n    /* open input file, and allocate format context */\n    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {\n        fprintf(stderr, \"Could not open source file %s\n\", src_filename);\n        exit(1);\n    }\n\n    /* retrieve stream information */\n    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {\n        fprintf(stderr, \"Could not find stream information\n\");\n        exit(1);\n    }\n\n    if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {\n        video_stream = fmt_ctx->streams[video_stream_idx];\n\n        video_dst_file = fopen(video_dst_filename, \"wb\");\n        if (!video_dst_file) {\n            fprintf(stderr, \"Could not open destination file %s\n\", video_dst_filename);\n            ret = 1;\n            goto end;\n        }\n\n        /* allocate image where the decoded image will be put */\n        width = video_dec_ctx->width;\n        height = video_dec_ctx->height;\n        pix_fmt = video_dec_ctx->pix_fmt;\n        ret = av_image_alloc(video_dst_data, video_dst_linesize,\n                             width, height, pix_fmt, 1);\n        if (ret < 0) {\n            fprintf(stderr, \"Could not allocate raw video buffer\n\");\n            goto end;\n        }\n        video_dst_bufsize = ret;\n    }\n\n    if 
(open_codec_context(&audio_stream_idx, &audio_dec_ctx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {\n        audio_stream = fmt_ctx->streams[audio_stream_idx];\n        audio_dst_file = fopen(audio_dst_filename, \"wb\");\n        if (!audio_dst_file) {\n            fprintf(stderr, \"Could not open destination file %s\\n\", audio_dst_filename);\n            ret = 1;\n            goto end;\n        }\n    }\n\n    /* dump input information to stderr */\n    av_dump_format(fmt_ctx, 0, src_filename, 0);\n\n    if (!audio_stream && !video_stream) {\n        fprintf(stderr, \"Could not find audio or video stream in the input, aborting\\n\");\n        ret = 1;\n        goto end;\n    }\n\n    frame = av_frame_alloc();\n    if (!frame) {\n        fprintf(stderr, \"Could not allocate frame\\n\");\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    /* initialize packet, set data to NULL, let the demuxer fill it */\n    av_init_packet(&pkt);\n    pkt.data = NULL;\n    pkt.size = 0;\n\n    if (video_stream)\n        printf(\"Demuxing video from file '%s' into '%s'\\n\", src_filename, video_dst_filename);\n    if (audio_stream)\n        printf(\"Demuxing audio from file '%s' into '%s'\\n\", src_filename, audio_dst_filename);\n\n    /* read frames from the file */\n    while (av_read_frame(fmt_ctx, &pkt) >= 0) {\n        AVPacket orig_pkt = pkt;\n        do {\n            ret = decode_packet(&got_frame, 0);\n            if (ret < 0)\n                break;\n            pkt.data += ret;\n            pkt.size -= ret;\n        } while (pkt.size > 0);\n        av_packet_unref(&orig_pkt);\n    }\n\n    /* flush cached frames */\n    pkt.data = NULL;\n    pkt.size = 0;\n    do {\n        decode_packet(&got_frame, 1);\n    } while (got_frame);\n\n    printf(\"Demuxing succeeded.\\n\");\n\n    if (video_stream) {\n        printf(\"Play the output video file with the command:\\n\"\n               \"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\\n\",\n               
av_get_pix_fmt_name(pix_fmt), width, height,\n               video_dst_filename);\n    }\n\n    if (audio_stream) {\n        enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;\n        int n_channels = audio_dec_ctx->channels;\n        const char *fmt;\n\n        if (av_sample_fmt_is_planar(sfmt)) {\n            const char *packed = av_get_sample_fmt_name(sfmt);\n            printf(\"Warning: the sample format the decoder produced is planar \"\n                   \"(%s). This example will output the first channel only.\\n\",\n                   packed ? packed : \"?\");\n            sfmt = av_get_packed_sample_fmt(sfmt);\n            n_channels = 1;\n        }\n\n        if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)\n            goto end;\n\n        printf(\"Play the output audio file with the command:\\n\"\n               \"ffplay -f %s -ac %d -ar %d %s\\n\",\n               fmt, n_channels, audio_dec_ctx->sample_rate,\n               audio_dst_filename);\n    }\n\nend:\n    avcodec_free_context(&video_dec_ctx);\n    avcodec_free_context(&audio_dec_ctx);\n    avformat_close_input(&fmt_ctx);\n    if (video_dst_file)\n        fclose(video_dst_file);\n    if (audio_dst_file)\n        fclose(audio_dst_file);\n    av_frame_free(&frame);\n    av_free(video_dst_data[0]);\n\n    return ret < 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/encode_audio.c",
    "content": "/*\n * Copyright (c) 2001 Fabrice Bellard\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * audio encoding with libavcodec API example.\n *\n * @example encode_audio.c\n */\n\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include <libavcodec/avcodec.h>\n\n#include <libavutil/channel_layout.h>\n#include <libavutil/common.h>\n#include <libavutil/frame.h>\n#include <libavutil/samplefmt.h>\n\n/* check that a given sample format is supported by the encoder */\nstatic int check_sample_fmt(const AVCodec *codec, enum AVSampleFormat sample_fmt)\n{\n    const enum AVSampleFormat *p = codec->sample_fmts;\n\n    while (*p != AV_SAMPLE_FMT_NONE) {\n        if (*p == sample_fmt)\n            return 1;\n        p++;\n    }\n    return 0;\n}\n\n/* just pick the highest supported samplerate */\nstatic int select_sample_rate(const AVCodec *codec)\n{\n    const int *p;\n    int best_samplerate = 
0;\n\n    if (!codec->supported_samplerates)\n        return 44100;\n\n    p = codec->supported_samplerates;\n    while (*p) {\n        if (!best_samplerate || abs(44100 - *p) < abs(44100 - best_samplerate))\n            best_samplerate = *p;\n        p++;\n    }\n    return best_samplerate;\n}\n\n/* select layout with the highest channel count */\nstatic uint64_t select_channel_layout(const AVCodec *codec)\n{\n    const uint64_t *p;\n    uint64_t best_ch_layout = 0;\n    int best_nb_channels   = 0;\n\n    if (!codec->channel_layouts)\n        return AV_CH_LAYOUT_STEREO;\n\n    p = codec->channel_layouts;\n    while (*p) {\n        int nb_channels = av_get_channel_layout_nb_channels(*p);\n\n        if (nb_channels > best_nb_channels) {\n            best_ch_layout    = *p;\n            best_nb_channels = nb_channels;\n        }\n        p++;\n    }\n    return best_ch_layout;\n}\n\nstatic void encode(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt,\n                   FILE *output)\n{\n    int ret;\n\n    /* send the frame for encoding */\n    ret = avcodec_send_frame(ctx, frame);\n    if (ret < 0) {\n        fprintf(stderr, \"Error sending the frame to the encoder\n\");\n        exit(1);\n    }\n\n    /* read all the available output packets (in general there may be any\n     * number of them) */\n    while (ret >= 0) {\n        ret = avcodec_receive_packet(ctx, pkt);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n            return;\n        else if (ret < 0) {\n            fprintf(stderr, \"Error encoding audio frame\n\");\n            exit(1);\n        }\n\n        fwrite(pkt->data, 1, pkt->size, output);\n        av_packet_unref(pkt);\n    }\n}\n\nint main(int argc, char **argv)\n{\n    const char *filename;\n    const AVCodec *codec;\n    AVCodecContext *c= NULL;\n    AVFrame *frame;\n    AVPacket *pkt;\n    int i, j, k, ret;\n    FILE *f;\n    uint16_t *samples;\n    float t, tincr;\n\n    if (argc <= 1) {\n        fprintf(stderr, \"Usage: %s 
<output file>\\n\", argv[0]);\n        return 0;\n    }\n    filename = argv[1];\n\n    /* find the MP2 encoder */\n    codec = avcodec_find_encoder(AV_CODEC_ID_MP2);\n    if (!codec) {\n        fprintf(stderr, \"Codec not found\\n\");\n        exit(1);\n    }\n\n    c = avcodec_alloc_context3(codec);\n    if (!c) {\n        fprintf(stderr, \"Could not allocate audio codec context\\n\");\n        exit(1);\n    }\n\n    /* put sample parameters */\n    c->bit_rate = 64000;\n\n    /* check that the encoder supports s16 pcm input */\n    c->sample_fmt = AV_SAMPLE_FMT_S16;\n    if (!check_sample_fmt(codec, c->sample_fmt)) {\n        fprintf(stderr, \"Encoder does not support sample format %s\",\n                av_get_sample_fmt_name(c->sample_fmt));\n        exit(1);\n    }\n\n    /* select other audio parameters supported by the encoder */\n    c->sample_rate    = select_sample_rate(codec);\n    c->channel_layout = select_channel_layout(codec);\n    c->channels       = av_get_channel_layout_nb_channels(c->channel_layout);\n\n    /* open it */\n    if (avcodec_open2(c, codec, NULL) < 0) {\n        fprintf(stderr, \"Could not open codec\\n\");\n        exit(1);\n    }\n\n    f = fopen(filename, \"wb\");\n    if (!f) {\n        fprintf(stderr, \"Could not open %s\\n\", filename);\n        exit(1);\n    }\n\n    /* packet for holding encoded output */\n    pkt = av_packet_alloc();\n    if (!pkt) {\n        fprintf(stderr, \"could not allocate the packet\\n\");\n        exit(1);\n    }\n\n    /* frame containing input raw audio */\n    frame = av_frame_alloc();\n    if (!frame) {\n        fprintf(stderr, \"Could not allocate audio frame\\n\");\n        exit(1);\n    }\n\n    frame->nb_samples     = c->frame_size;\n    frame->format         = c->sample_fmt;\n    frame->channel_layout = c->channel_layout;\n\n    /* allocate the data buffers */\n    ret = av_frame_get_buffer(frame, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not allocate audio data 
buffers\\n\");\n        exit(1);\n    }\n\n    /* encode a single tone sound */\n    t = 0;\n    tincr = 2 * M_PI * 440.0 / c->sample_rate;\n    for (i = 0; i < 200; i++) {\n        /* make sure the frame is writable -- makes a copy if the encoder\n         * kept a reference internally */\n        ret = av_frame_make_writable(frame);\n        if (ret < 0)\n            exit(1);\n        samples = (uint16_t*)frame->data[0];\n\n        for (j = 0; j < c->frame_size; j++) {\n            samples[2*j] = (int)(sin(t) * 10000);\n\n            for (k = 1; k < c->channels; k++)\n                samples[2*j + k] = samples[2*j];\n            t += tincr;\n        }\n        encode(c, frame, pkt, f);\n    }\n\n    /* flush the encoder */\n    encode(c, NULL, pkt, f);\n\n    fclose(f);\n\n    av_frame_free(&frame);\n    av_packet_free(&pkt);\n    avcodec_free_context(&c);\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/encode_video.c",
    "content": "/*\n * Copyright (c) 2001 Fabrice Bellard\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * video encoding with libavcodec API example\n *\n * @example encode_video.c\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include <libavcodec/avcodec.h>\n\n#include <libavutil/opt.h>\n#include <libavutil/imgutils.h>\n\nstatic void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt,\n                   FILE *outfile)\n{\n    int ret;\n\n    /* send the frame to the encoder */\n    if (frame)\n        printf(\"Send frame %3\"PRId64\"\\n\", frame->pts);\n\n    ret = avcodec_send_frame(enc_ctx, frame);\n    if (ret < 0) {\n        fprintf(stderr, \"Error sending a frame for encoding\\n\");\n        exit(1);\n    }\n\n    while (ret >= 0) {\n        ret = avcodec_receive_packet(enc_ctx, pkt);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n            return;\n    
    else if (ret < 0) {\n            fprintf(stderr, \"Error during encoding\\n\");\n            exit(1);\n        }\n\n        printf(\"Write packet %3\"PRId64\" (size=%5d)\\n\", pkt->pts, pkt->size);\n        fwrite(pkt->data, 1, pkt->size, outfile);\n        av_packet_unref(pkt);\n    }\n}\n\nint main(int argc, char **argv)\n{\n    const char *filename, *codec_name;\n    const AVCodec *codec;\n    AVCodecContext *c= NULL;\n    int i, ret, x, y;\n    FILE *f;\n    AVFrame *frame;\n    AVPacket *pkt;\n    uint8_t endcode[] = { 0, 0, 1, 0xb7 };\n\n    if (argc <= 2) {\n        fprintf(stderr, \"Usage: %s <output file> <codec name>\\n\", argv[0]);\n        exit(0);\n    }\n    filename = argv[1];\n    codec_name = argv[2];\n\n    /* find the mpeg1video encoder */\n    codec = avcodec_find_encoder_by_name(codec_name);\n    if (!codec) {\n        fprintf(stderr, \"Codec '%s' not found\\n\", codec_name);\n        exit(1);\n    }\n\n    c = avcodec_alloc_context3(codec);\n    if (!c) {\n        fprintf(stderr, \"Could not allocate video codec context\\n\");\n        exit(1);\n    }\n\n    pkt = av_packet_alloc();\n    if (!pkt)\n        exit(1);\n\n    /* put sample parameters */\n    c->bit_rate = 400000;\n    /* resolution must be a multiple of two */\n    c->width = 352;\n    c->height = 288;\n    /* frames per second */\n    c->time_base = (AVRational){1, 25};\n    c->framerate = (AVRational){25, 1};\n\n    /* emit one intra frame every ten frames\n     * check frame pict_type before passing frame\n     * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I\n     * then gop_size is ignored and the output of encoder\n     * will always be I frame irrespective to gop_size\n     */\n    c->gop_size = 10;\n    c->max_b_frames = 1;\n    c->pix_fmt = AV_PIX_FMT_YUV420P;\n\n    if (codec->id == AV_CODEC_ID_H264)\n        av_opt_set(c->priv_data, \"preset\", \"slow\", 0);\n\n    /* open it */\n    ret = avcodec_open2(c, codec, NULL);\n    if (ret < 0) {\n        
fprintf(stderr, \"Could not open codec: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    f = fopen(filename, \"wb\");\n    if (!f) {\n        fprintf(stderr, \"Could not open %s\\n\", filename);\n        exit(1);\n    }\n\n    frame = av_frame_alloc();\n    if (!frame) {\n        fprintf(stderr, \"Could not allocate video frame\\n\");\n        exit(1);\n    }\n    frame->format = c->pix_fmt;\n    frame->width  = c->width;\n    frame->height = c->height;\n\n    ret = av_frame_get_buffer(frame, 32);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not allocate the video frame data\\n\");\n        exit(1);\n    }\n\n    /* encode 1 second of video */\n    for (i = 0; i < 25; i++) {\n        fflush(stdout);\n\n        /* make sure the frame data is writable */\n        ret = av_frame_make_writable(frame);\n        if (ret < 0)\n            exit(1);\n\n        /* prepare a dummy image */\n        /* Y */\n        for (y = 0; y < c->height; y++) {\n            for (x = 0; x < c->width; x++) {\n                frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;\n            }\n        }\n\n        /* Cb and Cr */\n        for (y = 0; y < c->height/2; y++) {\n            for (x = 0; x < c->width/2; x++) {\n                frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;\n                frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;\n            }\n        }\n\n        frame->pts = i;\n\n        /* encode the image */\n        encode(c, frame, pkt, f);\n    }\n\n    /* flush the encoder */\n    encode(c, NULL, pkt, f);\n\n    /* add sequence end code to have a real MPEG file */\n    fwrite(endcode, 1, sizeof(endcode), f);\n    fclose(f);\n\n    avcodec_free_context(&c);\n    av_frame_free(&frame);\n    av_packet_free(&pkt);\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/extract_mvs.c",
    "content": "/*\n * Copyright (c) 2012 Stefano Sabatini\n * Copyright (c) 2014 Clément Bœsch\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include <libavutil/motion_vector.h>\n#include <libavformat/avformat.h>\n\nstatic AVFormatContext *fmt_ctx = NULL;\nstatic AVCodecContext *video_dec_ctx = NULL;\nstatic AVStream *video_stream = NULL;\nstatic const char *src_filename = NULL;\n\nstatic int video_stream_idx = -1;\nstatic AVFrame *frame = NULL;\nstatic int video_frame_count = 0;\n\nstatic int decode_packet(const AVPacket *pkt)\n{\n    int ret = avcodec_send_packet(video_dec_ctx, pkt);\n    if (ret < 0) {\n        fprintf(stderr, \"Error while sending a packet to the decoder: %s\\n\", av_err2str(ret));\n        return ret;\n    }\n\n    while (ret >= 0)  {\n        ret = avcodec_receive_frame(video_dec_ctx, frame);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {\n            break;\n        } else if (ret < 0) 
{\n            fprintf(stderr, \"Error while receiving a frame from the decoder: %s\\n\", av_err2str(ret));\n            return ret;\n        }\n\n        if (ret >= 0) {\n            int i;\n            AVFrameSideData *sd;\n\n            video_frame_count++;\n            sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);\n            if (sd) {\n                const AVMotionVector *mvs = (const AVMotionVector *)sd->data;\n                for (i = 0; i < sd->size / sizeof(*mvs); i++) {\n                    const AVMotionVector *mv = &mvs[i];\n                    printf(\"%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%\"PRIx64\"\\n\",\n                        video_frame_count, mv->source,\n                        mv->w, mv->h, mv->src_x, mv->src_y,\n                        mv->dst_x, mv->dst_y, mv->flags);\n                }\n            }\n            av_frame_unref(frame);\n        }\n    }\n\n    return 0;\n}\n\nstatic int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)\n{\n    int ret;\n    AVStream *st;\n    AVCodecContext *dec_ctx = NULL;\n    AVCodec *dec = NULL;\n    AVDictionary *opts = NULL;\n\n    ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not find %s stream in input file '%s'\\n\",\n                av_get_media_type_string(type), src_filename);\n        return ret;\n    } else {\n        int stream_idx = ret;\n        st = fmt_ctx->streams[stream_idx];\n\n        dec_ctx = avcodec_alloc_context3(dec);\n        if (!dec_ctx) {\n            fprintf(stderr, \"Failed to allocate codec\\n\");\n            return AVERROR(EINVAL);\n        }\n\n        ret = avcodec_parameters_to_context(dec_ctx, st->codecpar);\n        if (ret < 0) {\n            fprintf(stderr, \"Failed to copy codec parameters to codec context\\n\");\n            return ret;\n        }\n\n        /* Init the video decoder */\n        av_dict_set(&opts, \"flags2\", \"+export_mvs\", 0);\n        if 
((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {\n            fprintf(stderr, \"Failed to open %s codec\\n\",\n                    av_get_media_type_string(type));\n            return ret;\n        }\n\n        video_stream_idx = stream_idx;\n        video_stream = fmt_ctx->streams[video_stream_idx];\n        video_dec_ctx = dec_ctx;\n    }\n\n    return 0;\n}\n\nint main(int argc, char **argv)\n{\n    int ret = 0;\n    AVPacket pkt = { 0 };\n\n    if (argc != 2) {\n        fprintf(stderr, \"Usage: %s <video>\\n\", argv[0]);\n        exit(1);\n    }\n    src_filename = argv[1];\n\n    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {\n        fprintf(stderr, \"Could not open source file %s\\n\", src_filename);\n        exit(1);\n    }\n\n    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {\n        fprintf(stderr, \"Could not find stream information\\n\");\n        exit(1);\n    }\n\n    open_codec_context(fmt_ctx, AVMEDIA_TYPE_VIDEO);\n\n    av_dump_format(fmt_ctx, 0, src_filename, 0);\n\n    if (!video_stream) {\n        fprintf(stderr, \"Could not find video stream in the input, aborting\\n\");\n        ret = 1;\n        goto end;\n    }\n\n    frame = av_frame_alloc();\n    if (!frame) {\n        fprintf(stderr, \"Could not allocate frame\\n\");\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    printf(\"framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\\n\");\n\n    /* read frames from the file */\n    while (av_read_frame(fmt_ctx, &pkt) >= 0) {\n        if (pkt.stream_index == video_stream_idx)\n            ret = decode_packet(&pkt);\n        av_packet_unref(&pkt);\n        if (ret < 0)\n            break;\n    }\n\n    /* flush cached frames */\n    decode_packet(NULL);\n\nend:\n    avcodec_free_context(&video_dec_ctx);\n    avformat_close_input(&fmt_ctx);\n    av_frame_free(&frame);\n    return ret < 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/filter_audio.c",
    "content": "/*\n * copyright (c) 2013 Andrew Kelley\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * libavfilter API usage example.\n *\n * @example filter_audio.c\n * This example will generate a sine wave audio,\n * pass it through a simple filter chain, and then compute the MD5 checksum of\n * the output data.\n *\n * The filter chain it uses is:\n * (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)\n *\n * abuffer: This provides the endpoint where you can feed the decoded samples.\n * volume: In this example we hardcode it to 0.90.\n * aformat: This converts the samples to the samplefreq, channel layout,\n *          and sample format required by the audio device.\n * abuffersink: This provides the endpoint where you can read the samples after\n *              they have passed through the filter chain.\n */\n\n#include <inttypes.h>\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"libavutil/channel_layout.h\"\n#include \"libavutil/md5.h\"\n#include \"libavutil/mem.h\"\n#include \"libavutil/opt.h\"\n#include \"libavutil/samplefmt.h\"\n\n#include \"libavfilter/avfilter.h\"\n#include \"libavfilter/buffersink.h\"\n#include \"libavfilter/buffersrc.h\"\n\n#define INPUT_SAMPLERATE     
48000\n#define INPUT_FORMAT         AV_SAMPLE_FMT_FLTP\n#define INPUT_CHANNEL_LAYOUT AV_CH_LAYOUT_5POINT0\n\n#define VOLUME_VAL 0.90\n\nstatic int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,\n                             AVFilterContext **sink)\n{\n    AVFilterGraph *filter_graph;\n    AVFilterContext *abuffer_ctx;\n    const AVFilter  *abuffer;\n    AVFilterContext *volume_ctx;\n    const AVFilter  *volume;\n    AVFilterContext *aformat_ctx;\n    const AVFilter  *aformat;\n    AVFilterContext *abuffersink_ctx;\n    const AVFilter  *abuffersink;\n\n    AVDictionary *options_dict = NULL;\n    uint8_t options_str[1024];\n    uint8_t ch_layout[64];\n\n    int err;\n\n    /* Create a new filtergraph, which will contain all the filters. */\n    filter_graph = avfilter_graph_alloc();\n    if (!filter_graph) {\n        fprintf(stderr, \"Unable to create filter graph.\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* Create the abuffer filter;\n     * it will be used for feeding the data into the graph. */\n    abuffer = avfilter_get_by_name(\"abuffer\");\n    if (!abuffer) {\n        fprintf(stderr, \"Could not find the abuffer filter.\\n\");\n        return AVERROR_FILTER_NOT_FOUND;\n    }\n\n    abuffer_ctx = avfilter_graph_alloc_filter(filter_graph, abuffer, \"src\");\n    if (!abuffer_ctx) {\n        fprintf(stderr, \"Could not allocate the abuffer instance.\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* Set the filter options through the AVOptions API. 
*/\n    av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, INPUT_CHANNEL_LAYOUT);\n    av_opt_set    (abuffer_ctx, \"channel_layout\", ch_layout,                            AV_OPT_SEARCH_CHILDREN);\n    av_opt_set    (abuffer_ctx, \"sample_fmt\",     av_get_sample_fmt_name(INPUT_FORMAT), AV_OPT_SEARCH_CHILDREN);\n    av_opt_set_q  (abuffer_ctx, \"time_base\",      (AVRational){ 1, INPUT_SAMPLERATE },  AV_OPT_SEARCH_CHILDREN);\n    av_opt_set_int(abuffer_ctx, \"sample_rate\",    INPUT_SAMPLERATE,                     AV_OPT_SEARCH_CHILDREN);\n\n    /* Now initialize the filter; we pass NULL options, since we have already\n     * set all the options above. */\n    err = avfilter_init_str(abuffer_ctx, NULL);\n    if (err < 0) {\n        fprintf(stderr, \"Could not initialize the abuffer filter.\\n\");\n        return err;\n    }\n\n    /* Create volume filter. */\n    volume = avfilter_get_by_name(\"volume\");\n    if (!volume) {\n        fprintf(stderr, \"Could not find the volume filter.\\n\");\n        return AVERROR_FILTER_NOT_FOUND;\n    }\n\n    volume_ctx = avfilter_graph_alloc_filter(filter_graph, volume, \"volume\");\n    if (!volume_ctx) {\n        fprintf(stderr, \"Could not allocate the volume instance.\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* A different way of passing the options is as key/value pairs in a\n     * dictionary. */\n    av_dict_set(&options_dict, \"volume\", AV_STRINGIFY(VOLUME_VAL), 0);\n    err = avfilter_init_dict(volume_ctx, &options_dict);\n    av_dict_free(&options_dict);\n    if (err < 0) {\n        fprintf(stderr, \"Could not initialize the volume filter.\\n\");\n        return err;\n    }\n\n    /* Create the aformat filter;\n     * it ensures that the output is of the format we want. 
*/\n    aformat = avfilter_get_by_name(\"aformat\");\n    if (!aformat) {\n        fprintf(stderr, \"Could not find the aformat filter.\\n\");\n        return AVERROR_FILTER_NOT_FOUND;\n    }\n\n    aformat_ctx = avfilter_graph_alloc_filter(filter_graph, aformat, \"aformat\");\n    if (!aformat_ctx) {\n        fprintf(stderr, \"Could not allocate the aformat instance.\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* A third way of passing the options is in a string of the form\n     * key1=value1:key2=value2.... */\n    snprintf(options_str, sizeof(options_str),\n             \"sample_fmts=%s:sample_rates=%d:channel_layouts=0x%\"PRIx64,\n             av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100,\n             (uint64_t)AV_CH_LAYOUT_STEREO);\n    err = avfilter_init_str(aformat_ctx, options_str);\n    if (err < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Could not initialize the aformat filter.\\n\");\n        return err;\n    }\n\n    /* Finally create the abuffersink filter;\n     * it will be used to get the filtered data out of the graph. */\n    abuffersink = avfilter_get_by_name(\"abuffersink\");\n    if (!abuffersink) {\n        fprintf(stderr, \"Could not find the abuffersink filter.\\n\");\n        return AVERROR_FILTER_NOT_FOUND;\n    }\n\n    abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, \"sink\");\n    if (!abuffersink_ctx) {\n        fprintf(stderr, \"Could not allocate the abuffersink instance.\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* This filter takes no options. */\n    err = avfilter_init_str(abuffersink_ctx, NULL);\n    if (err < 0) {\n        fprintf(stderr, \"Could not initialize the abuffersink instance.\\n\");\n        return err;\n    }\n\n    /* Connect the filters;\n     * in this simple case the filters just form a linear chain. 
*/\n    err = avfilter_link(abuffer_ctx, 0, volume_ctx, 0);\n    if (err >= 0)\n        err = avfilter_link(volume_ctx, 0, aformat_ctx, 0);\n    if (err >= 0)\n        err = avfilter_link(aformat_ctx, 0, abuffersink_ctx, 0);\n    if (err < 0) {\n        fprintf(stderr, \"Error connecting filters\\n\");\n        return err;\n    }\n\n    /* Configure the graph. */\n    err = avfilter_graph_config(filter_graph, NULL);\n    if (err < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Error configuring the filter graph\\n\");\n        return err;\n    }\n\n    *graph = filter_graph;\n    *src   = abuffer_ctx;\n    *sink  = abuffersink_ctx;\n\n    return 0;\n}\n\n/* Do something useful with the filtered data: this simple\n * example just prints the MD5 checksum of each plane to stdout. */\nstatic int process_output(struct AVMD5 *md5, AVFrame *frame)\n{\n    int planar     = av_sample_fmt_is_planar(frame->format);\n    int channels   = av_get_channel_layout_nb_channels(frame->channel_layout);\n    int planes     = planar ? channels : 1;\n    int bps        = av_get_bytes_per_sample(frame->format);\n    int plane_size = bps * frame->nb_samples * (planar ? 1 : channels);\n    int i, j;\n\n    for (i = 0; i < planes; i++) {\n        uint8_t checksum[16];\n\n        av_md5_init(md5);\n        av_md5_sum(checksum, frame->extended_data[i], plane_size);\n\n        fprintf(stdout, \"plane %d: 0x\", i);\n        for (j = 0; j < sizeof(checksum); j++)\n            fprintf(stdout, \"%02X\", checksum[j]);\n        fprintf(stdout, \"\\n\");\n    }\n    fprintf(stdout, \"\\n\");\n\n    return 0;\n}\n\n/* Construct a frame of audio data to be filtered;\n * this simple example just synthesizes a sine wave. */\nstatic int get_input(AVFrame *frame, int frame_num)\n{\n    int err, i, j;\n\n#define FRAME_SIZE 1024\n\n    /* Set up the frame properties and allocate the buffer for the data. 
*/\n    frame->sample_rate    = INPUT_SAMPLERATE;\n    frame->format         = INPUT_FORMAT;\n    frame->channel_layout = INPUT_CHANNEL_LAYOUT;\n    frame->nb_samples     = FRAME_SIZE;\n    frame->pts            = frame_num * FRAME_SIZE;\n\n    err = av_frame_get_buffer(frame, 0);\n    if (err < 0)\n        return err;\n\n    /* Fill the data for each channel. */\n    for (i = 0; i < 5; i++) {\n        float *data = (float*)frame->extended_data[i];\n\n        for (j = 0; j < frame->nb_samples; j++)\n            data[j] = sin(2 * M_PI * (frame_num + j) * (i + 1) / FRAME_SIZE);\n    }\n\n    return 0;\n}\n\nint main(int argc, char *argv[])\n{\n    struct AVMD5 *md5;\n    AVFilterGraph *graph;\n    AVFilterContext *src, *sink;\n    AVFrame *frame;\n    uint8_t errstr[1024];\n    float duration;\n    int err, nb_frames, i;\n\n    if (argc < 2) {\n        fprintf(stderr, \"Usage: %s <duration>\\n\", argv[0]);\n        return 1;\n    }\n\n    duration  = atof(argv[1]);\n    nb_frames = duration * INPUT_SAMPLERATE / FRAME_SIZE;\n    if (nb_frames <= 0) {\n        fprintf(stderr, \"Invalid duration: %s\\n\", argv[1]);\n        return 1;\n    }\n\n    /* Allocate the frame we will be using to store the data. */\n    frame  = av_frame_alloc();\n    if (!frame) {\n        fprintf(stderr, \"Error allocating the frame\\n\");\n        return 1;\n    }\n\n    md5 = av_md5_alloc();\n    if (!md5) {\n        fprintf(stderr, \"Error allocating the MD5 context\\n\");\n        return 1;\n    }\n\n    /* Set up the filtergraph. 
*/\n    err = init_filter_graph(&graph, &src, &sink);\n    if (err < 0) {\n        fprintf(stderr, \"Unable to init filter graph:\");\n        goto fail;\n    }\n\n    /* the main filtering loop */\n    for (i = 0; i < nb_frames; i++) {\n        /* get an input frame to be filtered */\n        err = get_input(frame, i);\n        if (err < 0) {\n            fprintf(stderr, \"Error generating input frame:\");\n            goto fail;\n        }\n\n        /* Send the frame to the input of the filtergraph. */\n        err = av_buffersrc_add_frame(src, frame);\n        if (err < 0) {\n            av_frame_unref(frame);\n            fprintf(stderr, \"Error submitting the frame to the filtergraph:\");\n            goto fail;\n        }\n\n        /* Get all the filtered output that is available. */\n        while ((err = av_buffersink_get_frame(sink, frame)) >= 0) {\n            /* now do something with our filtered frame */\n            err = process_output(md5, frame);\n            if (err < 0) {\n                fprintf(stderr, \"Error processing the filtered frame:\");\n                goto fail;\n            }\n            av_frame_unref(frame);\n        }\n\n        if (err == AVERROR(EAGAIN)) {\n            /* Need to feed more frames in. */\n            continue;\n        } else if (err == AVERROR_EOF) {\n            /* Nothing more to do, finish. */\n            break;\n        } else if (err < 0) {\n            /* An error occurred. */\n            fprintf(stderr, \"Error filtering the data:\");\n            goto fail;\n        }\n    }\n\n    avfilter_graph_free(&graph);\n    av_frame_free(&frame);\n    av_freep(&md5);\n\n    return 0;\n\nfail:\n    av_strerror(err, errstr, sizeof(errstr));\n    fprintf(stderr, \"%s\\n\", errstr);\n    return 1;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/filtering_audio.c",
    "content": "/*\n * Copyright (c) 2010 Nicolas George\n * Copyright (c) 2011 Stefano Sabatini\n * Copyright (c) 2012 Clément Bœsch\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * API example for audio decoding and filtering\n * @example filtering_audio.c\n */\n\n#include <unistd.h>\n\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n#include <libavfilter/buffersink.h>\n#include <libavfilter/buffersrc.h>\n#include <libavutil/opt.h>\n\nstatic const char *filter_descr = \"aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono\";\nstatic const char *player       = \"ffplay -f s16le -ar 8000 -ac 1 -\";\n\nstatic AVFormatContext *fmt_ctx;\nstatic AVCodecContext *dec_ctx;\nAVFilterContext *buffersink_ctx;\nAVFilterContext *buffersrc_ctx;\nAVFilterGraph *filter_graph;\nstatic int audio_stream_index = -1;\n\nstatic int open_input_file(const char *filename)\n{\n    int ret;\n    AVCodec 
*dec;\n\n    if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot open input file\\n\");\n        return ret;\n    }\n\n    if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot find stream information\\n\");\n        return ret;\n    }\n\n    /* select the audio stream */\n    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &dec, 0);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot find an audio stream in the input file\\n\");\n        return ret;\n    }\n    audio_stream_index = ret;\n\n    /* create decoding context */\n    dec_ctx = avcodec_alloc_context3(dec);\n    if (!dec_ctx)\n        return AVERROR(ENOMEM);\n    avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[audio_stream_index]->codecpar);\n    av_opt_set_int(dec_ctx, \"refcounted_frames\", 1, 0);\n\n    /* init the audio decoder */\n    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot open audio decoder\\n\");\n        return ret;\n    }\n\n    return 0;\n}\n\nstatic int init_filters(const char *filters_descr)\n{\n    char args[512];\n    int ret = 0;\n    const AVFilter *abuffersrc  = avfilter_get_by_name(\"abuffer\");\n    const AVFilter *abuffersink = avfilter_get_by_name(\"abuffersink\");\n    AVFilterInOut *outputs = avfilter_inout_alloc();\n    AVFilterInOut *inputs  = avfilter_inout_alloc();\n    static const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };\n    static const int64_t out_channel_layouts[] = { AV_CH_LAYOUT_MONO, -1 };\n    static const int out_sample_rates[] = { 8000, -1 };\n    const AVFilterLink *outlink;\n    AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;\n\n    filter_graph = avfilter_graph_alloc();\n    if (!outputs || !inputs || !filter_graph) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    /* buffer audio source: the 
decoded frames from the decoder will be inserted here. */\n    if (!dec_ctx->channel_layout)\n        dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);\n    snprintf(args, sizeof(args),\n            \"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%\"PRIx64,\n             time_base.num, time_base.den, dec_ctx->sample_rate,\n             av_get_sample_fmt_name(dec_ctx->sample_fmt), dec_ctx->channel_layout);\n    ret = avfilter_graph_create_filter(&buffersrc_ctx, abuffersrc, \"in\",\n                                       args, NULL, filter_graph);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot create audio buffer source\\n\");\n        goto end;\n    }\n\n    /* buffer audio sink: to terminate the filter chain. */\n    ret = avfilter_graph_create_filter(&buffersink_ctx, abuffersink, \"out\",\n                                       NULL, NULL, filter_graph);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot create audio buffer sink\\n\");\n        goto end;\n    }\n\n    ret = av_opt_set_int_list(buffersink_ctx, \"sample_fmts\", out_sample_fmts, -1,\n                              AV_OPT_SEARCH_CHILDREN);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot set output sample format\\n\");\n        goto end;\n    }\n\n    ret = av_opt_set_int_list(buffersink_ctx, \"channel_layouts\", out_channel_layouts, -1,\n                              AV_OPT_SEARCH_CHILDREN);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot set output channel layout\\n\");\n        goto end;\n    }\n\n    ret = av_opt_set_int_list(buffersink_ctx, \"sample_rates\", out_sample_rates, -1,\n                              AV_OPT_SEARCH_CHILDREN);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot set output sample rate\\n\");\n        goto end;\n    }\n\n    /*\n     * Set the endpoints for the filter graph. 
The filter_graph will\n     * be linked to the graph described by filters_descr.\n     */\n\n    /*\n     * The buffer source output must be connected to the input pad of\n     * the first filter described by filters_descr; since the first\n     * filter input label is not specified, it is set to \"in\" by\n     * default.\n     */\n    outputs->name       = av_strdup(\"in\");\n    outputs->filter_ctx = buffersrc_ctx;\n    outputs->pad_idx    = 0;\n    outputs->next       = NULL;\n\n    /*\n     * The buffer sink input must be connected to the output pad of\n     * the last filter described by filters_descr; since the last\n     * filter output label is not specified, it is set to \"out\" by\n     * default.\n     */\n    inputs->name       = av_strdup(\"out\");\n    inputs->filter_ctx = buffersink_ctx;\n    inputs->pad_idx    = 0;\n    inputs->next       = NULL;\n\n    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,\n                                        &inputs, &outputs, NULL)) < 0)\n        goto end;\n\n    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)\n        goto end;\n\n    /* Print summary of the sink buffer\n     * Note: args buffer is reused to store channel layout string */\n    outlink = buffersink_ctx->inputs[0];\n    av_get_channel_layout_string(args, sizeof(args), -1, outlink->channel_layout);\n    av_log(NULL, AV_LOG_INFO, \"Output: srate:%dHz fmt:%s chlayout:%s\\n\",\n           (int)outlink->sample_rate,\n           (char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), \"?\"),\n           args);\n\nend:\n    avfilter_inout_free(&inputs);\n    avfilter_inout_free(&outputs);\n\n    return ret;\n}\n\nstatic void print_frame(const AVFrame *frame)\n{\n    const int n = frame->nb_samples * av_get_channel_layout_nb_channels(frame->channel_layout);\n    const uint16_t *p     = (uint16_t*)frame->data[0];\n    const uint16_t *p_end = p + n;\n\n    while (p < p_end) {\n        fputc(*p    & 0xff, stdout);\n        
fputc(*p>>8 & 0xff, stdout);\n        p++;\n    }\n    fflush(stdout);\n}\n\nint main(int argc, char **argv)\n{\n    int ret;\n    AVPacket packet;\n    AVFrame *frame = av_frame_alloc();\n    AVFrame *filt_frame = av_frame_alloc();\n\n    if (!frame || !filt_frame) {\n        perror(\"Could not allocate frame\");\n        exit(1);\n    }\n    if (argc != 2) {\n        fprintf(stderr, \"Usage: %s file | %s\\n\", argv[0], player);\n        exit(1);\n    }\n\n    if ((ret = open_input_file(argv[1])) < 0)\n        goto end;\n    if ((ret = init_filters(filter_descr)) < 0)\n        goto end;\n\n    /* read all packets */\n    while (1) {\n        if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)\n            break;\n\n        if (packet.stream_index == audio_stream_index) {\n            ret = avcodec_send_packet(dec_ctx, &packet);\n            if (ret < 0) {\n                av_log(NULL, AV_LOG_ERROR, \"Error while sending a packet to the decoder\\n\");\n                break;\n            }\n\n            while (ret >= 0) {\n                ret = avcodec_receive_frame(dec_ctx, frame);\n                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {\n                    break;\n                } else if (ret < 0) {\n                    av_log(NULL, AV_LOG_ERROR, \"Error while receiving a frame from the decoder\\n\");\n                    goto end;\n                }\n\n                if (ret >= 0) {\n                    /* push the audio data from decoded frame into the filtergraph */\n                    if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {\n                        av_log(NULL, AV_LOG_ERROR, \"Error while feeding the audio filtergraph\\n\");\n                        break;\n                    }\n\n                    /* pull filtered audio from the filtergraph */\n                    while (1) {\n                        ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);\n                        if (ret 
== AVERROR(EAGAIN) || ret == AVERROR_EOF)\n                            break;\n                        if (ret < 0)\n                            goto end;\n                        print_frame(filt_frame);\n                        av_frame_unref(filt_frame);\n                    }\n                    av_frame_unref(frame);\n                }\n            }\n        }\n        av_packet_unref(&packet);\n    }\nend:\n    avfilter_graph_free(&filter_graph);\n    avcodec_free_context(&dec_ctx);\n    avformat_close_input(&fmt_ctx);\n    av_frame_free(&frame);\n    av_frame_free(&filt_frame);\n\n    if (ret < 0 && ret != AVERROR_EOF) {\n        fprintf(stderr, \"Error occurred: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    exit(0);\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/filtering_video.c",
    "content": "/*\n * Copyright (c) 2010 Nicolas George\n * Copyright (c) 2011 Stefano Sabatini\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * API example for decoding and filtering\n * @example filtering_video.c\n */\n\n#define _XOPEN_SOURCE 600 /* for usleep */\n#include <unistd.h>\n\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n#include <libavfilter/buffersink.h>\n#include <libavfilter/buffersrc.h>\n#include <libavutil/opt.h>\n\nconst char *filter_descr = \"scale=78:24,transpose=cclock\";\n/* other way:\n   scale=78:24 [scl]; [scl] transpose=cclock // assumes \"[in]\" and \"[out]\" to be input output pads respectively\n */\n\nstatic AVFormatContext *fmt_ctx;\nstatic AVCodecContext *dec_ctx;\nAVFilterContext *buffersink_ctx;\nAVFilterContext *buffersrc_ctx;\nAVFilterGraph *filter_graph;\nstatic int video_stream_index = -1;\nstatic int64_t last_pts = AV_NOPTS_VALUE;\n\nstatic int 
open_input_file(const char *filename)\n{\n    int ret;\n    AVCodec *dec;\n\n    if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot open input file\\n\");\n        return ret;\n    }\n\n    if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot find stream information\\n\");\n        return ret;\n    }\n\n    /* select the video stream */\n    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot find a video stream in the input file\\n\");\n        return ret;\n    }\n    video_stream_index = ret;\n\n    /* create decoding context */\n    dec_ctx = avcodec_alloc_context3(dec);\n    if (!dec_ctx)\n        return AVERROR(ENOMEM);\n    avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);\n    av_opt_set_int(dec_ctx, \"refcounted_frames\", 1, 0);\n\n    /* init the video decoder */\n    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot open video decoder\\n\");\n        return ret;\n    }\n\n    return 0;\n}\n\nstatic int init_filters(const char *filters_descr)\n{\n    char args[512];\n    int ret = 0;\n    const AVFilter *buffersrc  = avfilter_get_by_name(\"buffer\");\n    const AVFilter *buffersink = avfilter_get_by_name(\"buffersink\");\n    AVFilterInOut *outputs = avfilter_inout_alloc();\n    AVFilterInOut *inputs  = avfilter_inout_alloc();\n    AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;\n    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };\n\n    filter_graph = avfilter_graph_alloc();\n    if (!outputs || !inputs || !filter_graph) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    /* buffer video source: the decoded frames from the decoder will be inserted here. 
*/\n    snprintf(args, sizeof(args),\n            \"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d\",\n            dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,\n            time_base.num, time_base.den,\n            dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);\n\n    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, \"in\",\n                                       args, NULL, filter_graph);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot create buffer source\\n\");\n        goto end;\n    }\n\n    /* buffer video sink: to terminate the filter chain. */\n    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, \"out\",\n                                       NULL, NULL, filter_graph);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot create buffer sink\\n\");\n        goto end;\n    }\n\n    ret = av_opt_set_int_list(buffersink_ctx, \"pix_fmts\", pix_fmts,\n                              AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot set output pixel format\\n\");\n        goto end;\n    }\n\n    /*\n     * Set the endpoints for the filter graph. 
The filter_graph will\n     * be linked to the graph described by filters_descr.\n     */\n\n    /*\n     * The buffer source output must be connected to the input pad of\n     * the first filter described by filters_descr; since the first\n     * filter input label is not specified, it is set to \"in\" by\n     * default.\n     */\n    outputs->name       = av_strdup(\"in\");\n    outputs->filter_ctx = buffersrc_ctx;\n    outputs->pad_idx    = 0;\n    outputs->next       = NULL;\n\n    /*\n     * The buffer sink input must be connected to the output pad of\n     * the last filter described by filters_descr; since the last\n     * filter output label is not specified, it is set to \"out\" by\n     * default.\n     */\n    inputs->name       = av_strdup(\"out\");\n    inputs->filter_ctx = buffersink_ctx;\n    inputs->pad_idx    = 0;\n    inputs->next       = NULL;\n\n    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,\n                                    &inputs, &outputs, NULL)) < 0)\n        goto end;\n\n    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)\n        goto end;\n\nend:\n    avfilter_inout_free(&inputs);\n    avfilter_inout_free(&outputs);\n\n    return ret;\n}\n\nstatic void display_frame(const AVFrame *frame, AVRational time_base)\n{\n    int x, y;\n    uint8_t *p0, *p;\n    int64_t delay;\n\n    if (frame->pts != AV_NOPTS_VALUE) {\n        if (last_pts != AV_NOPTS_VALUE) {\n            /* sleep roughly the right amount of time;\n             * usleep is in microseconds, just like AV_TIME_BASE. */\n            delay = av_rescale_q(frame->pts - last_pts,\n                                 time_base, AV_TIME_BASE_Q);\n            if (delay > 0 && delay < 1000000)\n                usleep(delay);\n        }\n        last_pts = frame->pts;\n    }\n\n    /* Trivial ASCII grayscale display. 
*/\n    p0 = frame->data[0];\n    puts(\"\\033c\");\n    for (y = 0; y < frame->height; y++) {\n        p = p0;\n        for (x = 0; x < frame->width; x++)\n            putchar(\" .-+#\"[*(p++) / 52]);\n        putchar('\\n');\n        p0 += frame->linesize[0];\n    }\n    fflush(stdout);\n}\n\nint main(int argc, char **argv)\n{\n    int ret;\n    AVPacket packet;\n    AVFrame *frame = av_frame_alloc();\n    AVFrame *filt_frame = av_frame_alloc();\n\n    if (!frame || !filt_frame) {\n        perror(\"Could not allocate frame\");\n        exit(1);\n    }\n    if (argc != 2) {\n        fprintf(stderr, \"Usage: %s file\\n\", argv[0]);\n        exit(1);\n    }\n\n    if ((ret = open_input_file(argv[1])) < 0)\n        goto end;\n    if ((ret = init_filters(filter_descr)) < 0)\n        goto end;\n\n    /* read all packets */\n    while (1) {\n        if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)\n            break;\n\n        if (packet.stream_index == video_stream_index) {\n            ret = avcodec_send_packet(dec_ctx, &packet);\n            if (ret < 0) {\n                av_log(NULL, AV_LOG_ERROR, \"Error while sending a packet to the decoder\\n\");\n                break;\n            }\n\n            while (ret >= 0) {\n                ret = avcodec_receive_frame(dec_ctx, frame);\n                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {\n                    break;\n                } else if (ret < 0) {\n                    av_log(NULL, AV_LOG_ERROR, \"Error while receiving a frame from the decoder\\n\");\n                    goto end;\n                }\n\n                if (ret >= 0) {\n                    frame->pts = frame->best_effort_timestamp;\n\n                    /* push the decoded frame into the filtergraph */\n                    if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {\n                        av_log(NULL, AV_LOG_ERROR, \"Error while feeding the filtergraph\\n\");\n                        
break;\n                    }\n\n                    /* pull filtered frames from the filtergraph */\n                    while (1) {\n                        ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);\n                        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n                            break;\n                        if (ret < 0)\n                            goto end;\n                        display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);\n                        av_frame_unref(filt_frame);\n                    }\n                    av_frame_unref(frame);\n                }\n            }\n        }\n        av_packet_unref(&packet);\n    }\nend:\n    avfilter_graph_free(&filter_graph);\n    avcodec_free_context(&dec_ctx);\n    avformat_close_input(&fmt_ctx);\n    av_frame_free(&frame);\n    av_frame_free(&filt_frame);\n\n    if (ret < 0 && ret != AVERROR_EOF) {\n        fprintf(stderr, \"Error occurred: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    exit(0);\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/http_multiclient.c",
    "content": "/*\n * Copyright (c) 2015 Stephan Holljes\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * libavformat multi-client network API usage example.\n *\n * @example http_multiclient.c\n * This example will serve a file without decoding or demuxing it over http.\n * Multiple clients can connect and will receive the same file.\n */\n\n#include <libavformat/avformat.h>\n#include <libavutil/opt.h>\n#include <unistd.h>\n\nstatic void process_client(AVIOContext *client, const char *in_uri)\n{\n    AVIOContext *input = NULL;\n    uint8_t buf[1024];\n    int ret, n, reply_code;\n    uint8_t *resource = NULL;\n    while ((ret = avio_handshake(client)) > 0) {\n        av_opt_get(client, \"resource\", AV_OPT_SEARCH_CHILDREN, &resource);\n        // check for strlen(resource) is necessary, because av_opt_get()\n        // may return empty string.\n        if (resource && strlen(resource))\n            break;\n    
    av_freep(&resource);\n    }\n    if (ret < 0)\n        goto end;\n    av_log(client, AV_LOG_TRACE, \"resource=%p\\n\", resource);\n    if (resource && resource[0] == '/' && !strcmp((resource + 1), in_uri)) {\n        reply_code = 200;\n    } else {\n        reply_code = AVERROR_HTTP_NOT_FOUND;\n    }\n    if ((ret = av_opt_set_int(client, \"reply_code\", reply_code, AV_OPT_SEARCH_CHILDREN)) < 0) {\n        av_log(client, AV_LOG_ERROR, \"Failed to set reply_code: %s.\\n\", av_err2str(ret));\n        goto end;\n    }\n    av_log(client, AV_LOG_TRACE, \"Set reply code to %d\\n\", reply_code);\n\n    while ((ret = avio_handshake(client)) > 0);\n\n    if (ret < 0)\n        goto end;\n\n    fprintf(stderr, \"Handshake performed.\\n\");\n    if (reply_code != 200)\n        goto end;\n    fprintf(stderr, \"Opening input file.\\n\");\n    if ((ret = avio_open2(&input, in_uri, AVIO_FLAG_READ, NULL, NULL)) < 0) {\n        av_log(input, AV_LOG_ERROR, \"Failed to open input: %s: %s.\\n\", in_uri,\n               av_err2str(ret));\n        goto end;\n    }\n    for(;;) {\n        n = avio_read(input, buf, sizeof(buf));\n        if (n < 0) {\n            if (n == AVERROR_EOF)\n                break;\n            av_log(input, AV_LOG_ERROR, \"Error reading from input: %s.\\n\",\n                   av_err2str(n));\n            break;\n        }\n        avio_write(client, buf, n);\n        avio_flush(client);\n    }\nend:\n    fprintf(stderr, \"Flushing client\\n\");\n    avio_flush(client);\n    fprintf(stderr, \"Closing client\\n\");\n    avio_close(client);\n    fprintf(stderr, \"Closing input\\n\");\n    avio_close(input);\n    av_freep(&resource);\n}\n\nint main(int argc, char **argv)\n{\n    AVDictionary *options = NULL;\n    AVIOContext *client = NULL, *server = NULL;\n    const char *in_uri, *out_uri;\n    int ret, pid;\n    av_log_set_level(AV_LOG_TRACE);\n    if (argc < 3) {\n        printf(\"usage: %s input http://hostname[:port]\\n\"\n               \"API example 
program to serve http to multiple clients.\\n\"\n               \"\\n\", argv[0]);\n        return 1;\n    }\n\n    in_uri = argv[1];\n    out_uri = argv[2];\n\n    avformat_network_init();\n\n    if ((ret = av_dict_set(&options, \"listen\", \"2\", 0)) < 0) {\n        fprintf(stderr, \"Failed to set listen mode for server: %s\\n\", av_err2str(ret));\n        return ret;\n    }\n    if ((ret = avio_open2(&server, out_uri, AVIO_FLAG_WRITE, NULL, &options)) < 0) {\n        fprintf(stderr, \"Failed to open server: %s\\n\", av_err2str(ret));\n        return ret;\n    }\n    fprintf(stderr, \"Entering main loop.\\n\");\n    for(;;) {\n        if ((ret = avio_accept(server, &client)) < 0)\n            goto end;\n        fprintf(stderr, \"Accepted client, forking process.\\n\");\n        // XXX: Since we don't reap our children and don't ignore signals\n        //      this produces zombie processes.\n        pid = fork();\n        if (pid < 0) {\n            perror(\"Fork failed\");\n            ret = AVERROR(errno);\n            goto end;\n        }\n        if (pid == 0) {\n            fprintf(stderr, \"In child.\\n\");\n            process_client(client, in_uri);\n            avio_close(server);\n            exit(0);\n        }\n        if (pid > 0)\n            avio_close(client);\n    }\nend:\n    avio_close(server);\n    if (ret < 0 && ret != AVERROR_EOF) {\n        fprintf(stderr, \"Some errors occurred: %s\\n\", av_err2str(ret));\n        return 1;\n    }\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/hw_decode.c",
    "content": "/*\n * Copyright (c) 2017 Jun Zhao\n * Copyright (c) 2017 Kaixuan Liu\n *\n * HW Acceleration API (video decoding) decode sample\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * HW-Accelerated decoding example.\n *\n * @example hw_decode.c\n * This example shows how to do HW-accelerated decoding with output\n * frames from the HW video surfaces.\n */\n\n#include <stdio.h>\n\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n#include <libavutil/pixdesc.h>\n#include <libavutil/hwcontext.h>\n#include <libavutil/opt.h>\n#include <libavutil/avassert.h>\n#include <libavutil/imgutils.h>\n\nstatic AVBufferRef *hw_device_ctx = NULL;\nstatic enum AVPixelFormat hw_pix_fmt;\nstatic FILE *output_file = NULL;\n\nstatic int hw_decoder_init(AVCodecContext *ctx, const enum AVHWDeviceType type)\n{\n    int err = 0;\n\n    if ((err = av_hwdevice_ctx_create(&hw_device_ctx, type,\n                                      NULL, NULL, 0)) < 0) {\n        fprintf(stderr, \"Failed to create specified HW device.\\n\");\n        return err;\n    }\n    ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);\n\n    return err;\n}\n\nstatic enum AVPixelFormat get_hw_format(AVCodecContext *ctx,\n                                     
   const enum AVPixelFormat *pix_fmts)\n{\n    const enum AVPixelFormat *p;\n\n    for (p = pix_fmts; *p != -1; p++) {\n        if (*p == hw_pix_fmt)\n            return *p;\n    }\n\n    fprintf(stderr, \"Failed to get HW surface format.\\n\");\n    return AV_PIX_FMT_NONE;\n}\n\nstatic int decode_write(AVCodecContext *avctx, AVPacket *packet)\n{\n    AVFrame *frame = NULL, *sw_frame = NULL;\n    AVFrame *tmp_frame = NULL;\n    uint8_t *buffer = NULL;\n    int size;\n    int ret = 0;\n\n    ret = avcodec_send_packet(avctx, packet);\n    if (ret < 0) {\n        fprintf(stderr, \"Error during decoding\\n\");\n        return ret;\n    }\n\n    while (1) {\n        if (!(frame = av_frame_alloc()) || !(sw_frame = av_frame_alloc())) {\n            fprintf(stderr, \"Can not alloc frame\\n\");\n            ret = AVERROR(ENOMEM);\n            goto fail;\n        }\n\n        ret = avcodec_receive_frame(avctx, frame);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {\n            av_frame_free(&frame);\n            av_frame_free(&sw_frame);\n            return 0;\n        } else if (ret < 0) {\n            fprintf(stderr, \"Error while decoding\\n\");\n            goto fail;\n        }\n\n        if (frame->format == hw_pix_fmt) {\n            /* retrieve data from GPU to CPU */\n            if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {\n                fprintf(stderr, \"Error transferring the data to system memory\\n\");\n                goto fail;\n            }\n            tmp_frame = sw_frame;\n        } else\n            tmp_frame = frame;\n\n        size = av_image_get_buffer_size(tmp_frame->format, tmp_frame->width,\n                                        tmp_frame->height, 1);\n        buffer = av_malloc(size);\n        if (!buffer) {\n            fprintf(stderr, \"Can not alloc buffer\\n\");\n            ret = AVERROR(ENOMEM);\n            goto fail;\n        }\n        ret = av_image_copy_to_buffer(buffer, size,\n                      
                (const uint8_t * const *)tmp_frame->data,\n                                      (const int *)tmp_frame->linesize, tmp_frame->format,\n                                      tmp_frame->width, tmp_frame->height, 1);\n        if (ret < 0) {\n            fprintf(stderr, \"Can not copy image to buffer\\n\");\n            goto fail;\n        }\n\n        if ((ret = fwrite(buffer, 1, size, output_file)) < 0) {\n            fprintf(stderr, \"Failed to dump raw data.\\n\");\n            goto fail;\n        }\n\n    fail:\n        av_frame_free(&frame);\n        av_frame_free(&sw_frame);\n        av_freep(&buffer);\n        if (ret < 0)\n            return ret;\n    }\n}\n\nint main(int argc, char *argv[])\n{\n    AVFormatContext *input_ctx = NULL;\n    int video_stream, ret;\n    AVStream *video = NULL;\n    AVCodecContext *decoder_ctx = NULL;\n    AVCodec *decoder = NULL;\n    AVPacket packet;\n    enum AVHWDeviceType type;\n    int i;\n\n    if (argc < 4) {\n        fprintf(stderr, \"Usage: %s <device type> <input file> <output file>\\n\", argv[0]);\n        return -1;\n    }\n\n    type = av_hwdevice_find_type_by_name(argv[1]);\n    if (type == AV_HWDEVICE_TYPE_NONE) {\n        fprintf(stderr, \"Device type %s is not supported.\\n\", argv[1]);\n        fprintf(stderr, \"Available device types:\");\n        while((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)\n            fprintf(stderr, \" %s\", av_hwdevice_get_type_name(type));\n        fprintf(stderr, \"\\n\");\n        return -1;\n    }\n\n    /* open the input file */\n    if (avformat_open_input(&input_ctx, argv[2], NULL, NULL) != 0) {\n        fprintf(stderr, \"Cannot open input file '%s'\\n\", argv[2]);\n        return -1;\n    }\n\n    if (avformat_find_stream_info(input_ctx, NULL) < 0) {\n        fprintf(stderr, \"Cannot find input stream information.\\n\");\n        return -1;\n    }\n\n    /* find the video stream information */\n    ret = av_find_best_stream(input_ctx, 
AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Cannot find a video stream in the input file\\n\");\n        return -1;\n    }\n    video_stream = ret;\n\n    for (i = 0;; i++) {\n        const AVCodecHWConfig *config = avcodec_get_hw_config(decoder, i);\n        if (!config) {\n            fprintf(stderr, \"Decoder %s does not support device type %s.\\n\",\n                    decoder->name, av_hwdevice_get_type_name(type));\n            return -1;\n        }\n        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&\n            config->device_type == type) {\n            hw_pix_fmt = config->pix_fmt;\n            break;\n        }\n    }\n\n    if (!(decoder_ctx = avcodec_alloc_context3(decoder)))\n        return AVERROR(ENOMEM);\n\n    video = input_ctx->streams[video_stream];\n    if (avcodec_parameters_to_context(decoder_ctx, video->codecpar) < 0)\n        return -1;\n\n    decoder_ctx->get_format  = get_hw_format;\n    av_opt_set_int(decoder_ctx, \"refcounted_frames\", 1, 0);\n\n    if (hw_decoder_init(decoder_ctx, type) < 0)\n        return -1;\n\n    if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0) {\n        fprintf(stderr, \"Failed to open codec for stream #%u\\n\", video_stream);\n        return -1;\n    }\n\n    /* open the file to dump raw data */\n    output_file = fopen(argv[3], \"w+\");\n\n    /* actual decoding and dump the raw data */\n    while (ret >= 0) {\n        if ((ret = av_read_frame(input_ctx, &packet)) < 0)\n            break;\n\n        if (video_stream == packet.stream_index)\n            ret = decode_write(decoder_ctx, &packet);\n\n        av_packet_unref(&packet);\n    }\n\n    /* flush the decoder */\n    packet.data = NULL;\n    packet.size = 0;\n    ret = decode_write(decoder_ctx, &packet);\n    av_packet_unref(&packet);\n\n    if (output_file)\n        fclose(output_file);\n    avcodec_free_context(&decoder_ctx);\n    avformat_close_input(&input_ctx);\n    
av_buffer_unref(&hw_device_ctx);\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/metadata.c",
    "content": "/*\n * Copyright (c) 2011 Reinhard Tartler\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * Shows how the metadata API can be used in application programs.\n * @example metadata.c\n */\n\n#include <stdio.h>\n\n#include <libavformat/avformat.h>\n#include <libavutil/dict.h>\n\nint main (int argc, char **argv)\n{\n    AVFormatContext *fmt_ctx = NULL;\n    AVDictionaryEntry *tag = NULL;\n    int ret;\n\n    if (argc != 2) {\n        printf(\"usage: %s <input_file>\\n\"\n               \"example program to demonstrate the use of the libavformat metadata API.\\n\"\n               \"\\n\", argv[0]);\n        return 1;\n    }\n\n    if ((ret = avformat_open_input(&fmt_ctx, argv[1], NULL, NULL)))\n        return ret;\n\n    while ((tag = av_dict_get(fmt_ctx->metadata, \"\", tag, AV_DICT_IGNORE_SUFFIX)))\n        printf(\"%s=%s\\n\", tag->key, tag->value);\n\n    avformat_close_input(&fmt_ctx);\n    return 
0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/muxing.c",
    "content": "/*\n * Copyright (c) 2003 Fabrice Bellard\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * libavformat API example.\n *\n * Output a media file in any supported libavformat format. 
The default\n * codecs are used.\n * @example muxing.c\n */\n\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <math.h>\n\n#include <libavutil/avassert.h>\n#include <libavutil/channel_layout.h>\n#include <libavutil/opt.h>\n#include <libavutil/mathematics.h>\n#include <libavutil/timestamp.h>\n#include <libavformat/avformat.h>\n#include <libswscale/swscale.h>\n#include <libswresample/swresample.h>\n\n#define STREAM_DURATION   10.0\n#define STREAM_FRAME_RATE 25 /* 25 images/s */\n#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */\n\n#define SCALE_FLAGS SWS_BICUBIC\n\n// a wrapper around a single output AVStream\ntypedef struct OutputStream {\n    AVStream *st;\n    AVCodecContext *enc;\n\n    /* pts of the next frame that will be generated */\n    int64_t next_pts;\n    int samples_count;\n\n    AVFrame *frame;\n    AVFrame *tmp_frame;\n\n    float t, tincr, tincr2;\n\n    struct SwsContext *sws_ctx;\n    struct SwrContext *swr_ctx;\n} OutputStream;\n\nstatic void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)\n{\n    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;\n\n    printf(\"pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\\n\",\n           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),\n           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),\n           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),\n           pkt->stream_index);\n}\n\nstatic int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)\n{\n    /* rescale output packet timestamp values from codec to stream timebase */\n    av_packet_rescale_ts(pkt, *time_base, st->time_base);\n    pkt->stream_index = st->index;\n\n    /* Write the compressed frame to the media file. */\n    log_packet(fmt_ctx, pkt);\n    return av_interleaved_write_frame(fmt_ctx, pkt);\n}\n\n/* Add an output stream. 
*/\nstatic void add_stream(OutputStream *ost, AVFormatContext *oc,\n                       AVCodec **codec,\n                       enum AVCodecID codec_id)\n{\n    AVCodecContext *c;\n    int i;\n\n    /* find the encoder */\n    *codec = avcodec_find_encoder(codec_id);\n    if (!(*codec)) {\n        fprintf(stderr, \"Could not find encoder for '%s'\\n\",\n                avcodec_get_name(codec_id));\n        exit(1);\n    }\n\n    ost->st = avformat_new_stream(oc, NULL);\n    if (!ost->st) {\n        fprintf(stderr, \"Could not allocate stream\\n\");\n        exit(1);\n    }\n    ost->st->id = oc->nb_streams-1;\n    c = avcodec_alloc_context3(*codec);\n    if (!c) {\n        fprintf(stderr, \"Could not alloc an encoding context\\n\");\n        exit(1);\n    }\n    ost->enc = c;\n\n    switch ((*codec)->type) {\n    case AVMEDIA_TYPE_AUDIO:\n        c->sample_fmt  = (*codec)->sample_fmts ?\n            (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;\n        c->bit_rate    = 64000;\n        c->sample_rate = 44100;\n        if ((*codec)->supported_samplerates) {\n            c->sample_rate = (*codec)->supported_samplerates[0];\n            for (i = 0; (*codec)->supported_samplerates[i]; i++) {\n                if ((*codec)->supported_samplerates[i] == 44100)\n                    c->sample_rate = 44100;\n            }\n        }\n        c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);\n        c->channel_layout = AV_CH_LAYOUT_STEREO;\n        if ((*codec)->channel_layouts) {\n            c->channel_layout = (*codec)->channel_layouts[0];\n            for (i = 0; (*codec)->channel_layouts[i]; i++) {\n                if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)\n                    c->channel_layout = AV_CH_LAYOUT_STEREO;\n            }\n        }\n        c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);\n        ost->st->time_base = (AVRational){ 1, c->sample_rate };\n        break;\n\n    case 
AVMEDIA_TYPE_VIDEO:\n        c->codec_id = codec_id;\n\n        c->bit_rate = 400000;\n        /* Resolution must be a multiple of two. */\n        c->width    = 352;\n        c->height   = 288;\n        /* timebase: This is the fundamental unit of time (in seconds) in terms\n         * of which frame timestamps are represented. For fixed-fps content,\n         * timebase should be 1/framerate and timestamp increments should be\n         * identical to 1. */\n        ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };\n        c->time_base       = ost->st->time_base;\n\n        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */\n        c->pix_fmt       = STREAM_PIX_FMT;\n        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {\n            /* just for testing, we also add B-frames */\n            c->max_b_frames = 2;\n        }\n        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {\n            /* Needed to avoid using macroblocks in which some coeffs overflow.\n             * This does not happen with normal video, it just happens here as\n             * the motion of the chroma plane does not match the luma plane. */\n            c->mb_decision = 2;\n        }\n    break;\n\n    default:\n        break;\n    }\n\n    /* Some formats want stream headers to be separate. 
*/\n    if (oc->oformat->flags & AVFMT_GLOBALHEADER)\n        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;\n}\n\n/**************************************************************/\n/* audio output */\n\nstatic AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,\n                                  uint64_t channel_layout,\n                                  int sample_rate, int nb_samples)\n{\n    AVFrame *frame = av_frame_alloc();\n    int ret;\n\n    if (!frame) {\n        fprintf(stderr, \"Error allocating an audio frame\\n\");\n        exit(1);\n    }\n\n    frame->format = sample_fmt;\n    frame->channel_layout = channel_layout;\n    frame->sample_rate = sample_rate;\n    frame->nb_samples = nb_samples;\n\n    if (nb_samples) {\n        ret = av_frame_get_buffer(frame, 0);\n        if (ret < 0) {\n            fprintf(stderr, \"Error allocating an audio buffer\\n\");\n            exit(1);\n        }\n    }\n\n    return frame;\n}\n\nstatic void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)\n{\n    AVCodecContext *c;\n    int nb_samples;\n    int ret;\n    AVDictionary *opt = NULL;\n\n    c = ost->enc;\n\n    /* open it */\n    av_dict_copy(&opt, opt_arg, 0);\n    ret = avcodec_open2(c, codec, &opt);\n    av_dict_free(&opt);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not open audio codec: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    /* init signal generator */\n    ost->t     = 0;\n    ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;\n    /* increment frequency by 110 Hz per second */\n    ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;\n\n    if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)\n        nb_samples = 10000;\n    else\n        nb_samples = c->frame_size;\n\n    ost->frame     = alloc_audio_frame(c->sample_fmt, c->channel_layout,\n                                       c->sample_rate, nb_samples);\n    ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, 
c->channel_layout,\n                                       c->sample_rate, nb_samples);\n\n    /* copy the stream parameters to the muxer */\n    ret = avcodec_parameters_from_context(ost->st->codecpar, c);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not copy the stream parameters\\n\");\n        exit(1);\n    }\n\n    /* create resampler context */\n        ost->swr_ctx = swr_alloc();\n        if (!ost->swr_ctx) {\n            fprintf(stderr, \"Could not allocate resampler context\\n\");\n            exit(1);\n        }\n\n        /* set options */\n        av_opt_set_int       (ost->swr_ctx, \"in_channel_count\",   c->channels,       0);\n        av_opt_set_int       (ost->swr_ctx, \"in_sample_rate\",     c->sample_rate,    0);\n        av_opt_set_sample_fmt(ost->swr_ctx, \"in_sample_fmt\",      AV_SAMPLE_FMT_S16, 0);\n        av_opt_set_int       (ost->swr_ctx, \"out_channel_count\",  c->channels,       0);\n        av_opt_set_int       (ost->swr_ctx, \"out_sample_rate\",    c->sample_rate,    0);\n        av_opt_set_sample_fmt(ost->swr_ctx, \"out_sample_fmt\",     c->sample_fmt,     0);\n\n        /* initialize the resampling context */\n        if ((ret = swr_init(ost->swr_ctx)) < 0) {\n            fprintf(stderr, \"Failed to initialize the resampling context\\n\");\n            exit(1);\n        }\n}\n\n/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and\n * 'nb_channels' channels. 
*/\nstatic AVFrame *get_audio_frame(OutputStream *ost)\n{\n    AVFrame *frame = ost->tmp_frame;\n    int j, i, v;\n    int16_t *q = (int16_t*)frame->data[0];\n\n    /* check if we want to generate more frames */\n    if (av_compare_ts(ost->next_pts, ost->enc->time_base,\n                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)\n        return NULL;\n\n    for (j = 0; j <frame->nb_samples; j++) {\n        v = (int)(sin(ost->t) * 10000);\n        for (i = 0; i < ost->enc->channels; i++)\n            *q++ = v;\n        ost->t     += ost->tincr;\n        ost->tincr += ost->tincr2;\n    }\n\n    frame->pts = ost->next_pts;\n    ost->next_pts  += frame->nb_samples;\n\n    return frame;\n}\n\n/*\n * encode one audio frame and send it to the muxer\n * return 1 when encoding is finished, 0 otherwise\n */\nstatic int write_audio_frame(AVFormatContext *oc, OutputStream *ost)\n{\n    AVCodecContext *c;\n    AVPacket pkt = { 0 }; // data and size must be 0;\n    AVFrame *frame;\n    int ret;\n    int got_packet;\n    int dst_nb_samples;\n\n    av_init_packet(&pkt);\n    c = ost->enc;\n\n    frame = get_audio_frame(ost);\n\n    if (frame) {\n        /* convert samples from native format to destination codec format, using the resampler */\n            /* compute destination number of samples */\n            dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,\n                                            c->sample_rate, c->sample_rate, AV_ROUND_UP);\n            av_assert0(dst_nb_samples == frame->nb_samples);\n\n        /* when we pass a frame to the encoder, it may keep a reference to it\n         * internally;\n         * make sure we do not overwrite it here\n         */\n        ret = av_frame_make_writable(ost->frame);\n        if (ret < 0)\n            exit(1);\n\n        /* convert to destination format */\n        ret = swr_convert(ost->swr_ctx,\n                          ost->frame->data, dst_nb_samples,\n             
             (const uint8_t **)frame->data, frame->nb_samples);\n        if (ret < 0) {\n            fprintf(stderr, \"Error while converting\\n\");\n            exit(1);\n        }\n        frame = ost->frame;\n\n        frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);\n        ost->samples_count += dst_nb_samples;\n    }\n\n    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);\n    if (ret < 0) {\n        fprintf(stderr, \"Error encoding audio frame: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    if (got_packet) {\n        ret = write_frame(oc, &c->time_base, ost->st, &pkt);\n        if (ret < 0) {\n            fprintf(stderr, \"Error while writing audio frame: %s\\n\",\n                    av_err2str(ret));\n            exit(1);\n        }\n    }\n\n    return (frame || got_packet) ? 0 : 1;\n}\n\n/**************************************************************/\n/* video output */\n\nstatic AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)\n{\n    AVFrame *picture;\n    int ret;\n\n    picture = av_frame_alloc();\n    if (!picture)\n        return NULL;\n\n    picture->format = pix_fmt;\n    picture->width  = width;\n    picture->height = height;\n\n    /* allocate the buffers for the frame data */\n    ret = av_frame_get_buffer(picture, 32);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not allocate frame data.\\n\");\n        exit(1);\n    }\n\n    return picture;\n}\n\nstatic void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)\n{\n    int ret;\n    AVCodecContext *c = ost->enc;\n    AVDictionary *opt = NULL;\n\n    av_dict_copy(&opt, opt_arg, 0);\n\n    /* open the codec */\n    ret = avcodec_open2(c, codec, &opt);\n    av_dict_free(&opt);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not open video codec: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    /* allocate and init a re-usable frame */\n    
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);\n    if (!ost->frame) {\n        fprintf(stderr, \"Could not allocate video frame\\n\");\n        exit(1);\n    }\n\n    /* If the output format is not YUV420P, then a temporary YUV420P\n     * picture is needed too. It is then converted to the required\n     * output format. */\n    ost->tmp_frame = NULL;\n    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {\n        ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);\n        if (!ost->tmp_frame) {\n            fprintf(stderr, \"Could not allocate temporary picture\\n\");\n            exit(1);\n        }\n    }\n\n    /* copy the stream parameters to the muxer */\n    ret = avcodec_parameters_from_context(ost->st->codecpar, c);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not copy the stream parameters\\n\");\n        exit(1);\n    }\n}\n\n/* Prepare a dummy image. */\nstatic void fill_yuv_image(AVFrame *pict, int frame_index,\n                           int width, int height)\n{\n    int x, y, i;\n\n    i = frame_index;\n\n    /* Y */\n    for (y = 0; y < height; y++)\n        for (x = 0; x < width; x++)\n            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;\n\n    /* Cb and Cr */\n    for (y = 0; y < height / 2; y++) {\n        for (x = 0; x < width / 2; x++) {\n            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;\n            pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;\n        }\n    }\n}\n\nstatic AVFrame *get_video_frame(OutputStream *ost)\n{\n    AVCodecContext *c = ost->enc;\n\n    /* check if we want to generate more frames */\n    if (av_compare_ts(ost->next_pts, c->time_base,\n                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)\n        return NULL;\n\n    /* when we pass a frame to the encoder, it may keep a reference to it\n     * internally; make sure we do not overwrite it here */\n    if (av_frame_make_writable(ost->frame) < 0)\n        exit(1);\n\n    
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {\n        /* as we only generate a YUV420P picture, we must convert it\n         * to the codec pixel format if needed */\n        if (!ost->sws_ctx) {\n            ost->sws_ctx = sws_getContext(c->width, c->height,\n                                          AV_PIX_FMT_YUV420P,\n                                          c->width, c->height,\n                                          c->pix_fmt,\n                                          SCALE_FLAGS, NULL, NULL, NULL);\n            if (!ost->sws_ctx) {\n                fprintf(stderr,\n                        \"Could not initialize the conversion context\\n\");\n                exit(1);\n            }\n        }\n        fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);\n        sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,\n                  ost->tmp_frame->linesize, 0, c->height, ost->frame->data,\n                  ost->frame->linesize);\n    } else {\n        fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);\n    }\n\n    ost->frame->pts = ost->next_pts++;\n\n    return ost->frame;\n}\n\n/*\n * encode one video frame and send it to the muxer\n * return 1 when encoding is finished, 0 otherwise\n */\nstatic int write_video_frame(AVFormatContext *oc, OutputStream *ost)\n{\n    int ret;\n    AVCodecContext *c;\n    AVFrame *frame;\n    int got_packet = 0;\n    AVPacket pkt = { 0 };\n\n    c = ost->enc;\n\n    frame = get_video_frame(ost);\n\n    av_init_packet(&pkt);\n\n    /* encode the image */\n    ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);\n    if (ret < 0) {\n        fprintf(stderr, \"Error encoding video frame: %s\\n\", av_err2str(ret));\n        exit(1);\n    }\n\n    if (got_packet) {\n        ret = write_frame(oc, &c->time_base, ost->st, &pkt);\n    } else {\n        ret = 0;\n    }\n\n    if (ret < 0) {\n        fprintf(stderr, \"Error while writing video frame: %s\\n\", av_err2str(ret));\n       
 exit(1);\n    }\n\n    return (frame || got_packet) ? 0 : 1;\n}\n\nstatic void close_stream(AVFormatContext *oc, OutputStream *ost)\n{\n    avcodec_free_context(&ost->enc);\n    av_frame_free(&ost->frame);\n    av_frame_free(&ost->tmp_frame);\n    sws_freeContext(ost->sws_ctx);\n    swr_free(&ost->swr_ctx);\n}\n\n/**************************************************************/\n/* media file output */\n\nint main(int argc, char **argv)\n{\n    OutputStream video_st = { 0 }, audio_st = { 0 };\n    const char *filename;\n    AVOutputFormat *fmt;\n    AVFormatContext *oc;\n    AVCodec *audio_codec, *video_codec;\n    int ret;\n    int have_video = 0, have_audio = 0;\n    int encode_video = 0, encode_audio = 0;\n    AVDictionary *opt = NULL;\n    int i;\n\n    if (argc < 2) {\n        printf(\"usage: %s output_file\\n\"\n               \"API example program to output a media file with libavformat.\\n\"\n               \"This program generates a synthetic audio and video stream, encodes and\\n\"\n               \"muxes them into a file named output_file.\\n\"\n               \"The output format is automatically guessed according to the file extension.\\n\"\n               \"Raw images can also be output by using '%%d' in the filename.\\n\"\n               \"\\n\", argv[0]);\n        return 1;\n    }\n\n    filename = argv[1];\n    for (i = 2; i+1 < argc; i+=2) {\n        if (!strcmp(argv[i], \"-flags\") || !strcmp(argv[i], \"-fflags\"))\n            av_dict_set(&opt, argv[i]+1, argv[i+1], 0);\n    }\n\n    /* allocate the output media context */\n    avformat_alloc_output_context2(&oc, NULL, NULL, filename);\n    if (!oc) {\n        printf(\"Could not deduce output format from file extension: using MPEG.\\n\");\n        avformat_alloc_output_context2(&oc, NULL, \"mpeg\", filename);\n    }\n    if (!oc)\n        return 1;\n\n    fmt = oc->oformat;\n\n    /* Add the audio and video streams using the default format codecs\n     * and initialize the codecs. 
*/\n    if (fmt->video_codec != AV_CODEC_ID_NONE) {\n        add_stream(&video_st, oc, &video_codec, fmt->video_codec);\n        have_video = 1;\n        encode_video = 1;\n    }\n    if (fmt->audio_codec != AV_CODEC_ID_NONE) {\n        add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);\n        have_audio = 1;\n        encode_audio = 1;\n    }\n\n    /* Now that all the parameters are set, we can open the audio and\n     * video codecs and allocate the necessary encode buffers. */\n    if (have_video)\n        open_video(oc, video_codec, &video_st, opt);\n\n    if (have_audio)\n        open_audio(oc, audio_codec, &audio_st, opt);\n\n    av_dump_format(oc, 0, filename, 1);\n\n    /* open the output file, if needed */\n    if (!(fmt->flags & AVFMT_NOFILE)) {\n        ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);\n        if (ret < 0) {\n            fprintf(stderr, \"Could not open '%s': %s\\n\", filename,\n                    av_err2str(ret));\n            return 1;\n        }\n    }\n\n    /* Write the stream header, if any. */\n    ret = avformat_write_header(oc, &opt);\n    if (ret < 0) {\n        fprintf(stderr, \"Error occurred when opening output file: %s\\n\",\n                av_err2str(ret));\n        return 1;\n    }\n\n    while (encode_video || encode_audio) {\n        /* select the stream to encode */\n        if (encode_video &&\n            (!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base,\n                                            audio_st.next_pts, audio_st.enc->time_base) <= 0)) {\n            encode_video = !write_video_frame(oc, &video_st);\n        } else {\n            encode_audio = !write_audio_frame(oc, &audio_st);\n        }\n    }\n\n    /* Write the trailer, if any. The trailer must be written before you\n     * close the CodecContexts open when you wrote the header; otherwise\n     * av_write_trailer() may try to use memory that was freed on\n     * av_codec_close(). 
*/\n    av_write_trailer(oc);\n\n    /* Close each codec. */\n    if (have_video)\n        close_stream(oc, &video_st);\n    if (have_audio)\n        close_stream(oc, &audio_st);\n\n    if (!(fmt->flags & AVFMT_NOFILE))\n        /* Close the output file. */\n        avio_closep(&oc->pb);\n\n    /* free the stream */\n    avformat_free_context(oc);\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/qsvdec.c",
    "content": "/*\n * Copyright (c) 2015 Anton Khirnov\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * Intel QSV-accelerated H.264 decoding example.\n *\n * @example qsvdec.c\n * This example shows how to do QSV-accelerated H.264 decoding with output\n * frames in the GPU video surfaces.\n */\n\n#include \"config.h\"\n\n#include <stdio.h>\n\n#include \"libavformat/avformat.h\"\n#include \"libavformat/avio.h\"\n\n#include \"libavcodec/avcodec.h\"\n\n#include \"libavutil/buffer.h\"\n#include \"libavutil/error.h\"\n#include \"libavutil/hwcontext.h\"\n#include \"libavutil/hwcontext_qsv.h\"\n#include \"libavutil/mem.h\"\n\ntypedef struct DecodeContext {\n    AVBufferRef *hw_device_ref;\n} DecodeContext;\n\nstatic int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)\n{\n    while (*pix_fmts != AV_PIX_FMT_NONE) {\n        if (*pix_fmts == AV_PIX_FMT_QSV) {\n            DecodeContext *decode = 
avctx->opaque;\n            AVHWFramesContext  *frames_ctx;\n            AVQSVFramesContext *frames_hwctx;\n            int ret;\n\n            /* create a pool of surfaces to be used by the decoder */\n            avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);\n            if (!avctx->hw_frames_ctx)\n                return AV_PIX_FMT_NONE;\n            frames_ctx   = (AVHWFramesContext*)avctx->hw_frames_ctx->data;\n            frames_hwctx = frames_ctx->hwctx;\n\n            frames_ctx->format            = AV_PIX_FMT_QSV;\n            frames_ctx->sw_format         = avctx->sw_pix_fmt;\n            frames_ctx->width             = FFALIGN(avctx->coded_width,  32);\n            frames_ctx->height            = FFALIGN(avctx->coded_height, 32);\n            frames_ctx->initial_pool_size = 32;\n\n            frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;\n\n            ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);\n            if (ret < 0)\n                return AV_PIX_FMT_NONE;\n\n            return AV_PIX_FMT_QSV;\n        }\n\n        pix_fmts++;\n    }\n\n    fprintf(stderr, \"The QSV pixel format not offered in get_format()\\n\");\n\n    return AV_PIX_FMT_NONE;\n}\n\nstatic int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,\n                         AVFrame *frame, AVFrame *sw_frame,\n                         AVPacket *pkt, AVIOContext *output_ctx)\n{\n    int ret = 0;\n\n    ret = avcodec_send_packet(decoder_ctx, pkt);\n    if (ret < 0) {\n        fprintf(stderr, \"Error during decoding\\n\");\n        return ret;\n    }\n\n    while (ret >= 0) {\n        int i, j;\n\n        ret = avcodec_receive_frame(decoder_ctx, frame);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n            break;\n        else if (ret < 0) {\n            fprintf(stderr, \"Error during decoding\\n\");\n            return ret;\n        }\n\n        /* A real program would do something useful with the decoded frame 
here.\n         * We just retrieve the raw data and write it to a file, which is rather\n         * useless but pedagogic. */\n        ret = av_hwframe_transfer_data(sw_frame, frame, 0);\n        if (ret < 0) {\n            fprintf(stderr, \"Error transferring the data to system memory\\n\");\n            goto fail;\n        }\n\n        for (i = 0; i < FF_ARRAY_ELEMS(sw_frame->data) && sw_frame->data[i]; i++)\n            for (j = 0; j < (sw_frame->height >> (i > 0)); j++)\n                avio_write(output_ctx, sw_frame->data[i] + j * sw_frame->linesize[i], sw_frame->width);\n\nfail:\n        av_frame_unref(sw_frame);\n        av_frame_unref(frame);\n\n        if (ret < 0)\n            return ret;\n    }\n\n    return 0;\n}\n\nint main(int argc, char **argv)\n{\n    AVFormatContext *input_ctx = NULL;\n    AVStream *video_st = NULL;\n    AVCodecContext *decoder_ctx = NULL;\n    const AVCodec *decoder;\n\n    AVPacket pkt = { 0 };\n    AVFrame *frame = NULL, *sw_frame = NULL;\n\n    DecodeContext decode = { NULL };\n\n    AVIOContext *output_ctx = NULL;\n\n    int ret, i;\n\n    if (argc < 3) {\n        fprintf(stderr, \"Usage: %s <input file> <output file>\\n\", argv[0]);\n        return 1;\n    }\n\n    /* open the input file */\n    ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);\n    if (ret < 0) {\n        fprintf(stderr, \"Cannot open input file '%s': \", argv[1]);\n        goto finish;\n    }\n\n    /* find the first H.264 video stream */\n    for (i = 0; i < input_ctx->nb_streams; i++) {\n        AVStream *st = input_ctx->streams[i];\n\n        if (st->codecpar->codec_id == AV_CODEC_ID_H264 && !video_st)\n            video_st = st;\n        else\n            st->discard = AVDISCARD_ALL;\n    }\n    if (!video_st) {\n        fprintf(stderr, \"No H.264 video stream in the input file\\n\");\n        goto finish;\n    }\n\n    /* open the hardware device */\n    ret = av_hwdevice_ctx_create(&decode.hw_device_ref, AV_HWDEVICE_TYPE_QSV,\n              
                   \"auto\", NULL, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Cannot open the hardware device\\n\");\n        goto finish;\n    }\n\n    /* initialize the decoder */\n    decoder = avcodec_find_decoder_by_name(\"h264_qsv\");\n    if (!decoder) {\n        fprintf(stderr, \"The QSV decoder is not present in libavcodec\\n\");\n        goto finish;\n    }\n\n    decoder_ctx = avcodec_alloc_context3(decoder);\n    if (!decoder_ctx) {\n        ret = AVERROR(ENOMEM);\n        goto finish;\n    }\n    decoder_ctx->codec_id = AV_CODEC_ID_H264;\n    if (video_st->codecpar->extradata_size) {\n        decoder_ctx->extradata = av_mallocz(video_st->codecpar->extradata_size +\n                                            AV_INPUT_BUFFER_PADDING_SIZE);\n        if (!decoder_ctx->extradata) {\n            ret = AVERROR(ENOMEM);\n            goto finish;\n        }\n        memcpy(decoder_ctx->extradata, video_st->codecpar->extradata,\n               video_st->codecpar->extradata_size);\n        decoder_ctx->extradata_size = video_st->codecpar->extradata_size;\n    }\n\n    decoder_ctx->opaque      = &decode;\n    decoder_ctx->get_format  = get_format;\n\n    ret = avcodec_open2(decoder_ctx, NULL, NULL);\n    if (ret < 0) {\n        fprintf(stderr, \"Error opening the decoder: \");\n        goto finish;\n    }\n\n    /* open the output stream */\n    ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);\n    if (ret < 0) {\n        fprintf(stderr, \"Error opening the output context: \");\n        goto finish;\n    }\n\n    frame    = av_frame_alloc();\n    sw_frame = av_frame_alloc();\n    if (!frame || !sw_frame) {\n        ret = AVERROR(ENOMEM);\n        goto finish;\n    }\n\n    /* actual decoding */\n    while (ret >= 0) {\n        ret = av_read_frame(input_ctx, &pkt);\n        if (ret < 0)\n            break;\n\n        if (pkt.stream_index == video_st->index)\n            ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, 
output_ctx);\n\n        av_packet_unref(&pkt);\n    }\n\n    /* flush the decoder */\n    pkt.data = NULL;\n    pkt.size = 0;\n    ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);\n\nfinish:\n    if (ret < 0) {\n        char buf[1024];\n        av_strerror(ret, buf, sizeof(buf));\n        fprintf(stderr, \"%s\\n\", buf);\n    }\n\n    avformat_close_input(&input_ctx);\n\n    av_frame_free(&frame);\n    av_frame_free(&sw_frame);\n\n    avcodec_free_context(&decoder_ctx);\n\n    av_buffer_unref(&decode.hw_device_ref);\n\n    avio_close(output_ctx);\n\n    return ret;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/remuxing.c",
    "content": "/*\n * Copyright (c) 2013 Stefano Sabatini\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * libavformat/libavcodec demuxing and muxing API example.\n *\n * Remux streams from one container format to another.\n * @example remuxing.c\n */\n\n#include <libavutil/timestamp.h>\n#include <libavformat/avformat.h>\n\nstatic void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)\n{\n    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;\n\n    printf(\"%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\\n\",\n           tag,\n           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),\n           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),\n           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),\n           pkt->stream_index);\n}\n\nint main(int argc, char **argv)\n{\n    
AVOutputFormat *ofmt = NULL;\n    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;\n    AVPacket pkt;\n    const char *in_filename, *out_filename;\n    int ret, i;\n    int stream_index = 0;\n    int *stream_mapping = NULL;\n    int stream_mapping_size = 0;\n\n    if (argc < 3) {\n        printf(\"usage: %s input output\\n\"\n               \"API example program to remux a media file with libavformat and libavcodec.\\n\"\n               \"The output format is guessed according to the file extension.\\n\"\n               \"\\n\", argv[0]);\n        return 1;\n    }\n\n    in_filename  = argv[1];\n    out_filename = argv[2];\n\n    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {\n        fprintf(stderr, \"Could not open input file '%s'\", in_filename);\n        goto end;\n    }\n\n    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {\n        fprintf(stderr, \"Failed to retrieve input stream information\");\n        goto end;\n    }\n\n    av_dump_format(ifmt_ctx, 0, in_filename, 0);\n\n    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);\n    if (!ofmt_ctx) {\n        fprintf(stderr, \"Could not create output context\\n\");\n        ret = AVERROR_UNKNOWN;\n        goto end;\n    }\n\n    stream_mapping_size = ifmt_ctx->nb_streams;\n    stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));\n    if (!stream_mapping) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    ofmt = ofmt_ctx->oformat;\n\n    for (i = 0; i < ifmt_ctx->nb_streams; i++) {\n        AVStream *out_stream;\n        AVStream *in_stream = ifmt_ctx->streams[i];\n        AVCodecParameters *in_codecpar = in_stream->codecpar;\n\n        if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&\n            in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&\n            in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {\n            stream_mapping[i] = -1;\n            continue;\n        }\n\n        stream_mapping[i] = 
stream_index++;\n\n        out_stream = avformat_new_stream(ofmt_ctx, NULL);\n        if (!out_stream) {\n            fprintf(stderr, \"Failed allocating output stream\\n\");\n            ret = AVERROR_UNKNOWN;\n            goto end;\n        }\n\n        ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);\n        if (ret < 0) {\n            fprintf(stderr, \"Failed to copy codec parameters\\n\");\n            goto end;\n        }\n        out_stream->codecpar->codec_tag = 0;\n    }\n    av_dump_format(ofmt_ctx, 0, out_filename, 1);\n\n    if (!(ofmt->flags & AVFMT_NOFILE)) {\n        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);\n        if (ret < 0) {\n            fprintf(stderr, \"Could not open output file '%s'\", out_filename);\n            goto end;\n        }\n    }\n\n    ret = avformat_write_header(ofmt_ctx, NULL);\n    if (ret < 0) {\n        fprintf(stderr, \"Error occurred when opening output file\\n\");\n        goto end;\n    }\n\n    while (1) {\n        AVStream *in_stream, *out_stream;\n\n        ret = av_read_frame(ifmt_ctx, &pkt);\n        if (ret < 0)\n            break;\n\n        in_stream  = ifmt_ctx->streams[pkt.stream_index];\n        if (pkt.stream_index >= stream_mapping_size ||\n            stream_mapping[pkt.stream_index] < 0) {\n            av_packet_unref(&pkt);\n            continue;\n        }\n\n        pkt.stream_index = stream_mapping[pkt.stream_index];\n        out_stream = ofmt_ctx->streams[pkt.stream_index];\n        log_packet(ifmt_ctx, &pkt, \"in\");\n\n        /* copy packet */\n        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);\n        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);\n        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);\n        pkt.pos = -1;\n        log_packet(ofmt_ctx, &pkt, 
\"out\");\n\n        ret = av_interleaved_write_frame(ofmt_ctx, &pkt);\n        if (ret < 0) {\n            fprintf(stderr, \"Error muxing packet\\n\");\n            break;\n        }\n        av_packet_unref(&pkt);\n    }\n\n    av_write_trailer(ofmt_ctx);\nend:\n\n    avformat_close_input(&ifmt_ctx);\n\n    /* close output */\n    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))\n        avio_closep(&ofmt_ctx->pb);\n    avformat_free_context(ofmt_ctx);\n\n    av_freep(&stream_mapping);\n\n    if (ret < 0 && ret != AVERROR_EOF) {\n        fprintf(stderr, \"Error occurred: %s\\n\", av_err2str(ret));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/resampling_audio.c",
    "content": "/*\n * Copyright (c) 2012 Stefano Sabatini\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @example resampling_audio.c\n * libswresample API use example.\n */\n\n#include <libavutil/opt.h>\n#include <libavutil/channel_layout.h>\n#include <libavutil/samplefmt.h>\n#include <libswresample/swresample.h>\n\nstatic int get_format_from_sample_fmt(const char **fmt,\n                                      enum AVSampleFormat sample_fmt)\n{\n    int i;\n    struct sample_fmt_entry {\n        enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;\n    } sample_fmt_entries[] = {\n        { AV_SAMPLE_FMT_U8,  \"u8\",    \"u8\"    },\n        { AV_SAMPLE_FMT_S16, \"s16be\", \"s16le\" },\n        { AV_SAMPLE_FMT_S32, \"s32be\", \"s32le\" },\n        { AV_SAMPLE_FMT_FLT, \"f32be\", \"f32le\" },\n        { AV_SAMPLE_FMT_DBL, \"f64be\", \"f64le\" },\n    };\n    *fmt = NULL;\n\n    for (i = 0; i < 
FF_ARRAY_ELEMS(sample_fmt_entries); i++) {\n        struct sample_fmt_entry *entry = &sample_fmt_entries[i];\n        if (sample_fmt == entry->sample_fmt) {\n            *fmt = AV_NE(entry->fmt_be, entry->fmt_le);\n            return 0;\n        }\n    }\n\n    fprintf(stderr,\n            \"Sample format %s not supported as output format\\n\",\n            av_get_sample_fmt_name(sample_fmt));\n    return AVERROR(EINVAL);\n}\n\n/**\n * Fill dst buffer with nb_samples, generated starting from t.\n */\nstatic void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)\n{\n    int i, j;\n    double tincr = 1.0 / sample_rate, *dstp = dst;\n    const double c = 2 * M_PI * 440.0;\n\n    /* generate sin tone with 440Hz frequency and duplicated channels */\n    for (i = 0; i < nb_samples; i++) {\n        *dstp = sin(c * *t);\n        for (j = 1; j < nb_channels; j++)\n            dstp[j] = dstp[0];\n        dstp += nb_channels;\n        *t += tincr;\n    }\n}\n\nint main(int argc, char **argv)\n{\n    int64_t src_ch_layout = AV_CH_LAYOUT_STEREO, dst_ch_layout = AV_CH_LAYOUT_SURROUND;\n    int src_rate = 48000, dst_rate = 44100;\n    uint8_t **src_data = NULL, **dst_data = NULL;\n    int src_nb_channels = 0, dst_nb_channels = 0;\n    int src_linesize, dst_linesize;\n    int src_nb_samples = 1024, dst_nb_samples, max_dst_nb_samples;\n    enum AVSampleFormat src_sample_fmt = AV_SAMPLE_FMT_DBL, dst_sample_fmt = AV_SAMPLE_FMT_S16;\n    const char *dst_filename = NULL;\n    FILE *dst_file;\n    int dst_bufsize;\n    const char *fmt;\n    struct SwrContext *swr_ctx;\n    double t;\n    int ret;\n\n    if (argc != 2) {\n        fprintf(stderr, \"Usage: %s output_file\\n\"\n                \"API example program to show how to resample an audio stream with libswresample.\\n\"\n                \"This program generates a series of audio frames, resamples them to a specified \"\n                \"output format and rate and saves them to an output file 
named output_file.\\n\",\n            argv[0]);\n        exit(1);\n    }\n    dst_filename = argv[1];\n\n    dst_file = fopen(dst_filename, \"wb\");\n    if (!dst_file) {\n        fprintf(stderr, \"Could not open destination file %s\\n\", dst_filename);\n        exit(1);\n    }\n\n    /* create resampler context */\n    swr_ctx = swr_alloc();\n    if (!swr_ctx) {\n        fprintf(stderr, \"Could not allocate resampler context\\n\");\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    /* set options */\n    av_opt_set_int(swr_ctx, \"in_channel_layout\",    src_ch_layout, 0);\n    av_opt_set_int(swr_ctx, \"in_sample_rate\",       src_rate, 0);\n    av_opt_set_sample_fmt(swr_ctx, \"in_sample_fmt\", src_sample_fmt, 0);\n\n    av_opt_set_int(swr_ctx, \"out_channel_layout\",    dst_ch_layout, 0);\n    av_opt_set_int(swr_ctx, \"out_sample_rate\",       dst_rate, 0);\n    av_opt_set_sample_fmt(swr_ctx, \"out_sample_fmt\", dst_sample_fmt, 0);\n\n    /* initialize the resampling context */\n    if ((ret = swr_init(swr_ctx)) < 0) {\n        fprintf(stderr, \"Failed to initialize the resampling context\\n\");\n        goto end;\n    }\n\n    /* allocate source and destination samples buffers */\n\n    src_nb_channels = av_get_channel_layout_nb_channels(src_ch_layout);\n    ret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels,\n                                             src_nb_samples, src_sample_fmt, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not allocate source samples\\n\");\n        goto end;\n    }\n\n    /* compute the number of converted samples: buffering is avoided\n     * ensuring that the output buffer will contain at least all the\n     * converted input samples */\n    max_dst_nb_samples = dst_nb_samples =\n        av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);\n\n    /* buffer is going to be directly written to a rawaudio file, no alignment */\n    dst_nb_channels = 
av_get_channel_layout_nb_channels(dst_ch_layout);\n    ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels,\n                                             dst_nb_samples, dst_sample_fmt, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Could not allocate destination samples\\n\");\n        goto end;\n    }\n\n    t = 0;\n    do {\n        /* generate synthetic audio */\n        fill_samples((double *)src_data[0], src_nb_samples, src_nb_channels, src_rate, &t);\n\n        /* compute destination number of samples */\n        dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +\n                                        src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);\n        if (dst_nb_samples > max_dst_nb_samples) {\n            av_freep(&dst_data[0]);\n            ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,\n                                   dst_nb_samples, dst_sample_fmt, 1);\n            if (ret < 0)\n                break;\n            max_dst_nb_samples = dst_nb_samples;\n        }\n\n        /* convert to destination format */\n        ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)src_data, src_nb_samples);\n        if (ret < 0) {\n            fprintf(stderr, \"Error while converting\\n\");\n            goto end;\n        }\n        dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,\n                                                 ret, dst_sample_fmt, 1);\n        if (dst_bufsize < 0) {\n            fprintf(stderr, \"Could not get sample buffer size\\n\");\n            goto end;\n        }\n        printf(\"t:%f in:%d out:%d\\n\", t, src_nb_samples, ret);\n        fwrite(dst_data[0], 1, dst_bufsize, dst_file);\n    } while (t < 10);\n\n    if ((ret = get_format_from_sample_fmt(&fmt, dst_sample_fmt)) < 0)\n        goto end;\n    fprintf(stderr, \"Resampling succeeded. 
Play the output file with the command:\\n\"\n            \"ffplay -f %s -channel_layout %\"PRId64\" -channels %d -ar %d %s\\n\",\n            fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);\n\nend:\n    fclose(dst_file);\n\n    if (src_data)\n        av_freep(&src_data[0]);\n    av_freep(&src_data);\n\n    if (dst_data)\n        av_freep(&dst_data[0]);\n    av_freep(&dst_data);\n\n    swr_free(&swr_ctx);\n    return ret < 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/scaling_video.c",
    "content": "/*\n * Copyright (c) 2012 Stefano Sabatini\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * libswscale API use example.\n * @example scaling_video.c\n */\n\n#include <libavutil/imgutils.h>\n#include <libavutil/parseutils.h>\n#include <libswscale/swscale.h>\n\nstatic void fill_yuv_image(uint8_t *data[4], int linesize[4],\n                           int width, int height, int frame_index)\n{\n    int x, y;\n\n    /* Y */\n    for (y = 0; y < height; y++)\n        for (x = 0; x < width; x++)\n            data[0][y * linesize[0] + x] = x + y + frame_index * 3;\n\n    /* Cb and Cr */\n    for (y = 0; y < height / 2; y++) {\n        for (x = 0; x < width / 2; x++) {\n            data[1][y * linesize[1] + x] = 128 + y + frame_index * 2;\n            data[2][y * linesize[2] + x] = 64 + x + frame_index * 5;\n        }\n    }\n}\n\nint main(int argc, char **argv)\n{\n    uint8_t *src_data[4], 
*dst_data[4];\n    int src_linesize[4], dst_linesize[4];\n    int src_w = 320, src_h = 240, dst_w, dst_h;\n    enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;\n    const char *dst_size = NULL;\n    const char *dst_filename = NULL;\n    FILE *dst_file;\n    int dst_bufsize;\n    struct SwsContext *sws_ctx;\n    int i, ret;\n\n    if (argc != 3) {\n        fprintf(stderr, \"Usage: %s output_file output_size\\n\"\n                \"API example program to show how to scale an image with libswscale.\\n\"\n                \"This program generates a series of pictures, rescales them to the given \"\n                \"output_size and saves them to an output file named output_file.\\n\"\n                \"\\n\", argv[0]);\n        exit(1);\n    }\n    dst_filename = argv[1];\n    dst_size     = argv[2];\n\n    if (av_parse_video_size(&dst_w, &dst_h, dst_size) < 0) {\n        fprintf(stderr,\n                \"Invalid size '%s', must be in the form WxH or a valid size abbreviation\\n\",\n                dst_size);\n        exit(1);\n    }\n\n    dst_file = fopen(dst_filename, \"wb\");\n    if (!dst_file) {\n        fprintf(stderr, \"Could not open destination file %s\\n\", dst_filename);\n        exit(1);\n    }\n\n    /* create scaling context */\n    sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,\n                             dst_w, dst_h, dst_pix_fmt,\n                             SWS_BILINEAR, NULL, NULL, NULL);\n    if (!sws_ctx) {\n        fprintf(stderr,\n                \"Impossible to create scale context for the conversion \"\n                \"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\\n\",\n                av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,\n                av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);\n        ret = AVERROR(EINVAL);\n        goto end;\n    }\n\n    /* allocate source and destination image buffers */\n    if ((ret = av_image_alloc(src_data, src_linesize,\n                              src_w, src_h, 
src_pix_fmt, 16)) < 0) {\n        fprintf(stderr, \"Could not allocate source image\\n\");\n        goto end;\n    }\n\n    /* buffer is going to be written to rawvideo file, no alignment */\n    if ((ret = av_image_alloc(dst_data, dst_linesize,\n                              dst_w, dst_h, dst_pix_fmt, 1)) < 0) {\n        fprintf(stderr, \"Could not allocate destination image\\n\");\n        goto end;\n    }\n    dst_bufsize = ret;\n\n    for (i = 0; i < 100; i++) {\n        /* generate synthetic video */\n        fill_yuv_image(src_data, src_linesize, src_w, src_h, i);\n\n        /* convert to destination format */\n        sws_scale(sws_ctx, (const uint8_t * const*)src_data,\n                  src_linesize, 0, src_h, dst_data, dst_linesize);\n\n        /* write scaled image to file */\n        fwrite(dst_data[0], 1, dst_bufsize, dst_file);\n    }\n\n    fprintf(stderr, \"Scaling succeeded. Play the output file with the command:\\n\"\n           \"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\\n\",\n           av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);\n\nend:\n    fclose(dst_file);\n    av_freep(&src_data[0]);\n    av_freep(&dst_data[0]);\n    sws_freeContext(sws_ctx);\n    return ret < 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/transcode_aac.c",
    "content": "/*\n * Copyright (c) 2013-2018 Andreas Unterweger\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Simple audio converter\n *\n * @example transcode_aac.c\n * Convert an input audio file to AAC in an MP4 container using FFmpeg.\n * Formats other than MP4 are supported based on the output file extension.\n * @author Andreas Unterweger (dustsigns@gmail.com)\n */\n\n#include <stdio.h>\n\n#include \"libavformat/avformat.h\"\n#include \"libavformat/avio.h\"\n\n#include \"libavcodec/avcodec.h\"\n\n#include \"libavutil/audio_fifo.h\"\n#include \"libavutil/avassert.h\"\n#include \"libavutil/avstring.h\"\n#include \"libavutil/frame.h\"\n#include \"libavutil/opt.h\"\n\n#include \"libswresample/swresample.h\"\n\n/* The output bit rate in bit/s */\n#define OUTPUT_BIT_RATE 96000\n/* The number of output channels */\n#define OUTPUT_CHANNELS 2\n\n/**\n * Open an input file and the required decoder.\n * @param      filename             File to be opened\n * @param[out] input_format_context Format context of opened file\n * @param[out] input_codec_context  Codec context of opened file\n * @return Error code (0 if successful)\n */\nstatic int open_input_file(const char *filename,\n                           AVFormatContext 
**input_format_context,\n                           AVCodecContext **input_codec_context)\n{\n    AVCodecContext *avctx;\n    AVCodec *input_codec;\n    int error;\n\n    /* Open the input file to read from it. */\n    if ((error = avformat_open_input(input_format_context, filename, NULL,\n                                     NULL)) < 0) {\n        fprintf(stderr, \"Could not open input file '%s' (error '%s')\\n\",\n                filename, av_err2str(error));\n        *input_format_context = NULL;\n        return error;\n    }\n\n    /* Get information on the input file (number of streams etc.). */\n    if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {\n        fprintf(stderr, \"Could not find stream info (error '%s')\\n\",\n                av_err2str(error));\n        avformat_close_input(input_format_context);\n        return error;\n    }\n\n    /* Make sure that there is only one stream in the input file. */\n    if ((*input_format_context)->nb_streams != 1) {\n        fprintf(stderr, \"Expected one audio input stream, but found %d\\n\",\n                (*input_format_context)->nb_streams);\n        avformat_close_input(input_format_context);\n        return AVERROR_EXIT;\n    }\n\n    /* Find a decoder for the audio stream. */\n    if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codecpar->codec_id))) {\n        fprintf(stderr, \"Could not find input codec\\n\");\n        avformat_close_input(input_format_context);\n        return AVERROR_EXIT;\n    }\n\n    /* Allocate a new decoding context. */\n    avctx = avcodec_alloc_context3(input_codec);\n    if (!avctx) {\n        fprintf(stderr, \"Could not allocate a decoding context\\n\");\n        avformat_close_input(input_format_context);\n        return AVERROR(ENOMEM);\n    }\n\n    /* Initialize the stream parameters with demuxer information. 
*/\n    error = avcodec_parameters_to_context(avctx, (*input_format_context)->streams[0]->codecpar);\n    if (error < 0) {\n        avformat_close_input(input_format_context);\n        avcodec_free_context(&avctx);\n        return error;\n    }\n\n    /* Open the decoder for the audio stream to use it later. */\n    if ((error = avcodec_open2(avctx, input_codec, NULL)) < 0) {\n        fprintf(stderr, \"Could not open input codec (error '%s')\\n\",\n                av_err2str(error));\n        avcodec_free_context(&avctx);\n        avformat_close_input(input_format_context);\n        return error;\n    }\n\n    /* Save the decoder context for easier access later. */\n    *input_codec_context = avctx;\n\n    return 0;\n}\n\n/**\n * Open an output file and the required encoder.\n * Also set some basic encoder parameters.\n * Some of these parameters are based on the input file's parameters.\n * @param      filename              File to be opened\n * @param      input_codec_context   Codec context of input file\n * @param[out] output_format_context Format context of output file\n * @param[out] output_codec_context  Codec context of output file\n * @return Error code (0 if successful)\n */\nstatic int open_output_file(const char *filename,\n                            AVCodecContext *input_codec_context,\n                            AVFormatContext **output_format_context,\n                            AVCodecContext **output_codec_context)\n{\n    AVCodecContext *avctx          = NULL;\n    AVIOContext *output_io_context = NULL;\n    AVStream *stream               = NULL;\n    AVCodec *output_codec          = NULL;\n    int error;\n\n    /* Open the output file to write to it. 
*/\n    if ((error = avio_open(&output_io_context, filename,\n                           AVIO_FLAG_WRITE)) < 0) {\n        fprintf(stderr, \"Could not open output file '%s' (error '%s')\\n\",\n                filename, av_err2str(error));\n        return error;\n    }\n\n    /* Create a new format context for the output container format. */\n    if (!(*output_format_context = avformat_alloc_context())) {\n        fprintf(stderr, \"Could not allocate output format context\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* Associate the output file (pointer) with the container format context. */\n    (*output_format_context)->pb = output_io_context;\n\n    /* Guess the desired container format based on the file extension. */\n    if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,\n                                                              NULL))) {\n        fprintf(stderr, \"Could not find output file format\\n\");\n        goto cleanup;\n    }\n\n    if (!((*output_format_context)->url = av_strdup(filename))) {\n        fprintf(stderr, \"Could not allocate url.\\n\");\n        error = AVERROR(ENOMEM);\n        goto cleanup;\n    }\n\n    /* Find the encoder to be used by its name. */\n    if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {\n        fprintf(stderr, \"Could not find an AAC encoder.\\n\");\n        goto cleanup;\n    }\n\n    /* Create a new audio stream in the output file container. */\n    if (!(stream = avformat_new_stream(*output_format_context, NULL))) {\n        fprintf(stderr, \"Could not create new stream\\n\");\n        error = AVERROR(ENOMEM);\n        goto cleanup;\n    }\n\n    avctx = avcodec_alloc_context3(output_codec);\n    if (!avctx) {\n        fprintf(stderr, \"Could not allocate an encoding context\\n\");\n        error = AVERROR(ENOMEM);\n        goto cleanup;\n    }\n\n    /* Set the basic encoder parameters.\n     * The input file's sample rate is used to avoid a sample rate conversion. 
*/\n    avctx->channels       = OUTPUT_CHANNELS;\n    avctx->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);\n    avctx->sample_rate    = input_codec_context->sample_rate;\n    avctx->sample_fmt     = output_codec->sample_fmts[0];\n    avctx->bit_rate       = OUTPUT_BIT_RATE;\n\n    /* Allow the use of the experimental AAC encoder. */\n    avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;\n\n    /* Set the sample rate for the container. */\n    stream->time_base.den = input_codec_context->sample_rate;\n    stream->time_base.num = 1;\n\n    /* Some container formats (like MP4) require global headers to be present.\n     * Mark the encoder so that it behaves accordingly. */\n    if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)\n        avctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;\n\n    /* Open the encoder for the audio stream to use it later. */\n    if ((error = avcodec_open2(avctx, output_codec, NULL)) < 0) {\n        fprintf(stderr, \"Could not open output codec (error '%s')\\n\",\n                av_err2str(error));\n        goto cleanup;\n    }\n\n    error = avcodec_parameters_from_context(stream->codecpar, avctx);\n    if (error < 0) {\n        fprintf(stderr, \"Could not initialize stream parameters\\n\");\n        goto cleanup;\n    }\n\n    /* Save the encoder context for easier access later. */\n    *output_codec_context = avctx;\n\n    return 0;\n\ncleanup:\n    avcodec_free_context(&avctx);\n    avio_closep(&(*output_format_context)->pb);\n    avformat_free_context(*output_format_context);\n    *output_format_context = NULL;\n    return error < 0 ? error : AVERROR_EXIT;\n}\n\n/**\n * Initialize one data packet for reading or writing.\n * @param packet Packet to be initialized\n */\nstatic void init_packet(AVPacket *packet)\n{\n    av_init_packet(packet);\n    /* Set the packet data and size so that it is recognized as being empty. 
*/\n    packet->data = NULL;\n    packet->size = 0;\n}\n\n/**\n * Initialize one audio frame for reading from the input file.\n * @param[out] frame Frame to be initialized\n * @return Error code (0 if successful)\n */\nstatic int init_input_frame(AVFrame **frame)\n{\n    if (!(*frame = av_frame_alloc())) {\n        fprintf(stderr, \"Could not allocate input frame\\n\");\n        return AVERROR(ENOMEM);\n    }\n    return 0;\n}\n\n/**\n * Initialize the audio resampler based on the input and output codec settings.\n * If the input and output sample formats differ, a conversion is required\n * libswresample takes care of this, but requires initialization.\n * @param      input_codec_context  Codec context of the input file\n * @param      output_codec_context Codec context of the output file\n * @param[out] resample_context     Resample context for the required conversion\n * @return Error code (0 if successful)\n */\nstatic int init_resampler(AVCodecContext *input_codec_context,\n                          AVCodecContext *output_codec_context,\n                          SwrContext **resample_context)\n{\n        int error;\n\n        /*\n         * Create a resampler context for the conversion.\n         * Set the conversion parameters.\n         * Default channel layouts based on the number of channels\n         * are assumed for simplicity (they are sometimes not detected\n         * properly by the demuxer and/or decoder).\n         */\n        *resample_context = swr_alloc_set_opts(NULL,\n                                              av_get_default_channel_layout(output_codec_context->channels),\n                                              output_codec_context->sample_fmt,\n                                              output_codec_context->sample_rate,\n                                              av_get_default_channel_layout(input_codec_context->channels),\n                                              input_codec_context->sample_fmt,\n                      
                        input_codec_context->sample_rate,\n                                              0, NULL);\n        if (!*resample_context) {\n            fprintf(stderr, \"Could not allocate resample context\\n\");\n            return AVERROR(ENOMEM);\n        }\n        /*\n        * Perform a sanity check so that the number of converted samples is\n        * not greater than the number of samples to be converted.\n        * If the sample rates differ, this case has to be handled differently\n        */\n        av_assert0(output_codec_context->sample_rate == input_codec_context->sample_rate);\n\n        /* Open the resampler with the specified parameters. */\n        if ((error = swr_init(*resample_context)) < 0) {\n            fprintf(stderr, \"Could not open resample context\\n\");\n            swr_free(resample_context);\n            return error;\n        }\n    return 0;\n}\n\n/**\n * Initialize a FIFO buffer for the audio samples to be encoded.\n * @param[out] fifo                 Sample buffer\n * @param      output_codec_context Codec context of the output file\n * @return Error code (0 if successful)\n */\nstatic int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)\n{\n    /* Create the FIFO buffer based on the specified output sample format. 
*/\n    if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,\n                                      output_codec_context->channels, 1))) {\n        fprintf(stderr, \"Could not allocate FIFO\\n\");\n        return AVERROR(ENOMEM);\n    }\n    return 0;\n}\n\n/**\n * Write the header of the output file container.\n * @param output_format_context Format context of the output file\n * @return Error code (0 if successful)\n */\nstatic int write_output_file_header(AVFormatContext *output_format_context)\n{\n    int error;\n    if ((error = avformat_write_header(output_format_context, NULL)) < 0) {\n        fprintf(stderr, \"Could not write output file header (error '%s')\\n\",\n                av_err2str(error));\n        return error;\n    }\n    return 0;\n}\n\n/**\n * Decode one audio frame from the input file.\n * @param      frame                Audio frame to be decoded\n * @param      input_format_context Format context of the input file\n * @param      input_codec_context  Codec context of the input file\n * @param[out] data_present         Indicates whether data has been decoded\n * @param[out] finished             Indicates whether the end of file has\n *                                  been reached and all data has been\n *                                  decoded. If this flag is false, there\n *                                  is more data to be decoded, i.e., this\n *                                  function has to be called again.\n * @return Error code (0 if successful)\n */\nstatic int decode_audio_frame(AVFrame *frame,\n                              AVFormatContext *input_format_context,\n                              AVCodecContext *input_codec_context,\n                              int *data_present, int *finished)\n{\n    /* Packet used for temporary storage. */\n    AVPacket input_packet;\n    int error;\n    init_packet(&input_packet);\n\n    /* Read one audio frame from the input file into a temporary packet. 
*/\n    if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {\n        /* If we are at the end of the file, flush the decoder below. */\n        if (error == AVERROR_EOF)\n            *finished = 1;\n        else {\n            fprintf(stderr, \"Could not read frame (error '%s')\\n\",\n                    av_err2str(error));\n            return error;\n        }\n    }\n\n    /* Send the audio frame stored in the temporary packet to the decoder.\n     * The input audio stream decoder is used to do this. */\n    if ((error = avcodec_send_packet(input_codec_context, &input_packet)) < 0) {\n        fprintf(stderr, \"Could not send packet for decoding (error '%s')\\n\",\n                av_err2str(error));\n        return error;\n    }\n\n    /* Receive one frame from the decoder. */\n    error = avcodec_receive_frame(input_codec_context, frame);\n    /* If the decoder asks for more data to be able to decode a frame,\n     * return indicating that no data is present. */\n    if (error == AVERROR(EAGAIN)) {\n        error = 0;\n        goto cleanup;\n    /* If the end of the input file is reached, stop decoding. */\n    } else if (error == AVERROR_EOF) {\n        *finished = 1;\n        error = 0;\n        goto cleanup;\n    } else if (error < 0) {\n        fprintf(stderr, \"Could not decode frame (error '%s')\\n\",\n                av_err2str(error));\n        goto cleanup;\n    /* Default case: Return decoded data. */\n    } else {\n        *data_present = 1;\n        goto cleanup;\n    }\n\ncleanup:\n    av_packet_unref(&input_packet);\n    return error;\n}\n\n/**\n * Initialize a temporary storage for the specified number of audio samples.\n * The conversion requires temporary storage due to the different format.\n * The number of audio samples to be allocated is specified in frame_size.\n * @param[out] converted_input_samples Array of converted samples. 
The\n *                                     dimensions are reference, channel\n *                                     (for multi-channel audio), sample.\n * @param      output_codec_context    Codec context of the output file\n * @param      frame_size              Number of samples to be converted in\n *                                     each round\n * @return Error code (0 if successful)\n */\nstatic int init_converted_samples(uint8_t ***converted_input_samples,\n                                  AVCodecContext *output_codec_context,\n                                  int frame_size)\n{\n    int error;\n\n    /* Allocate as many pointers as there are audio channels.\n     * Each pointer will later point to the audio samples of the corresponding\n     * channels (although it may be NULL for interleaved formats).\n     */\n    if (!(*converted_input_samples = calloc(output_codec_context->channels,\n                                            sizeof(**converted_input_samples)))) {\n        fprintf(stderr, \"Could not allocate converted input sample pointers\\n\");\n        return AVERROR(ENOMEM);\n    }\n\n    /* Allocate memory for the samples of all channels in one consecutive\n     * block for convenience. 
*/\n    if ((error = av_samples_alloc(*converted_input_samples, NULL,\n                                  output_codec_context->channels,\n                                  frame_size,\n                                  output_codec_context->sample_fmt, 0)) < 0) {\n        fprintf(stderr,\n                \"Could not allocate converted input samples (error '%s')\\n\",\n                av_err2str(error));\n        av_freep(&(*converted_input_samples)[0]);\n        free(*converted_input_samples);\n        return error;\n    }\n    return 0;\n}\n\n/**\n * Convert the input audio samples into the output sample format.\n * The conversion happens on a per-frame basis, the size of which is\n * specified by frame_size.\n * @param      input_data       Samples to be decoded. The dimensions are\n *                              channel (for multi-channel audio), sample.\n * @param[out] converted_data   Converted samples. The dimensions are channel\n *                              (for multi-channel audio), sample.\n * @param      frame_size       Number of samples to be converted\n * @param      resample_context Resample context for the conversion\n * @return Error code (0 if successful)\n */\nstatic int convert_samples(const uint8_t **input_data,\n                           uint8_t **converted_data, const int frame_size,\n                           SwrContext *resample_context)\n{\n    int error;\n\n    /* Convert the samples using the resampler. 
*/\n    if ((error = swr_convert(resample_context,\n                             converted_data, frame_size,\n                             input_data    , frame_size)) < 0) {\n        fprintf(stderr, \"Could not convert input samples (error '%s')\\n\",\n                av_err2str(error));\n        return error;\n    }\n\n    return 0;\n}\n\n/**\n * Add converted input audio samples to the FIFO buffer for later processing.\n * @param fifo                    Buffer to add the samples to\n * @param converted_input_samples Samples to be added. The dimensions are channel\n *                                (for multi-channel audio), sample.\n * @param frame_size              Number of samples to be converted\n * @return Error code (0 if successful)\n */\nstatic int add_samples_to_fifo(AVAudioFifo *fifo,\n                               uint8_t **converted_input_samples,\n                               const int frame_size)\n{\n    int error;\n\n    /* Make the FIFO as large as it needs to be to hold both,\n     * the old and the new samples. */\n    if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {\n        fprintf(stderr, \"Could not reallocate FIFO\\n\");\n        return error;\n    }\n\n    /* Store the new samples in the FIFO buffer. 
*/\n    if (av_audio_fifo_write(fifo, (void **)converted_input_samples,\n                            frame_size) < frame_size) {\n        fprintf(stderr, \"Could not write data to FIFO\\n\");\n        return AVERROR_EXIT;\n    }\n    return 0;\n}\n\n/**\n * Read one audio frame from the input file, decode, convert and store\n * it in the FIFO buffer.\n * @param      fifo                 Buffer used for temporary storage\n * @param      input_format_context Format context of the input file\n * @param      input_codec_context  Codec context of the input file\n * @param      output_codec_context Codec context of the output file\n * @param      resampler_context    Resample context for the conversion\n * @param[out] finished             Indicates whether the end of file has\n *                                  been reached and all data has been\n *                                  decoded. If this flag is false,\n *                                  there is more data to be decoded,\n *                                  i.e., this function has to be called\n *                                  again.\n * @return Error code (0 if successful)\n */\nstatic int read_decode_convert_and_store(AVAudioFifo *fifo,\n                                         AVFormatContext *input_format_context,\n                                         AVCodecContext *input_codec_context,\n                                         AVCodecContext *output_codec_context,\n                                         SwrContext *resampler_context,\n                                         int *finished)\n{\n    /* Temporary storage of the input samples of the frame read from the file. */\n    AVFrame *input_frame = NULL;\n    /* Temporary storage for the converted input samples. */\n    uint8_t **converted_input_samples = NULL;\n    int data_present = 0;\n    int ret = AVERROR_EXIT;\n\n    /* Initialize temporary storage for one input frame. 
*/\n    if (init_input_frame(&input_frame))\n        goto cleanup;\n    /* Decode one frame worth of audio samples. */\n    if (decode_audio_frame(input_frame, input_format_context,\n                           input_codec_context, &data_present, finished))\n        goto cleanup;\n    /* If we are at the end of the file and there are no more samples\n     * in the decoder which are delayed, we are actually finished.\n     * This must not be treated as an error. */\n    if (*finished) {\n        ret = 0;\n        goto cleanup;\n    }\n    /* If there is decoded data, convert and store it. */\n    if (data_present) {\n        /* Initialize the temporary storage for the converted input samples. */\n        if (init_converted_samples(&converted_input_samples, output_codec_context,\n                                   input_frame->nb_samples))\n            goto cleanup;\n\n        /* Convert the input samples to the desired output sample format.\n         * This requires a temporary storage provided by converted_input_samples. */\n        if (convert_samples((const uint8_t**)input_frame->extended_data, converted_input_samples,\n                            input_frame->nb_samples, resampler_context))\n            goto cleanup;\n\n        /* Add the converted input samples to the FIFO buffer for later processing. 
*/\n        if (add_samples_to_fifo(fifo, converted_input_samples,\n                                input_frame->nb_samples))\n            goto cleanup;\n        ret = 0;\n    }\n    ret = 0;\n\ncleanup:\n    if (converted_input_samples) {\n        av_freep(&converted_input_samples[0]);\n        free(converted_input_samples);\n    }\n    av_frame_free(&input_frame);\n\n    return ret;\n}\n\n/**\n * Initialize one input frame for writing to the output file.\n * The frame will be exactly frame_size samples large.\n * @param[out] frame                Frame to be initialized\n * @param      output_codec_context Codec context of the output file\n * @param      frame_size           Size of the frame\n * @return Error code (0 if successful)\n */\nstatic int init_output_frame(AVFrame **frame,\n                             AVCodecContext *output_codec_context,\n                             int frame_size)\n{\n    int error;\n\n    /* Create a new frame to store the audio samples. */\n    if (!(*frame = av_frame_alloc())) {\n        fprintf(stderr, \"Could not allocate output frame\\n\");\n        return AVERROR_EXIT;\n    }\n\n    /* Set the frame's parameters, especially its size and format.\n     * av_frame_get_buffer needs this to allocate memory for the\n     * audio samples of the frame.\n     * Default channel layouts based on the number of channels\n     * are assumed for simplicity. */\n    (*frame)->nb_samples     = frame_size;\n    (*frame)->channel_layout = output_codec_context->channel_layout;\n    (*frame)->format         = output_codec_context->sample_fmt;\n    (*frame)->sample_rate    = output_codec_context->sample_rate;\n\n    /* Allocate the samples of the created frame. This call will make\n     * sure that the audio frame can hold as many samples as specified. 
*/\n    if ((error = av_frame_get_buffer(*frame, 0)) < 0) {\n        fprintf(stderr, \"Could not allocate output frame samples (error '%s')\\n\",\n                av_err2str(error));\n        av_frame_free(frame);\n        return error;\n    }\n\n    return 0;\n}\n\n/* Global timestamp for the audio frames. */\nstatic int64_t pts = 0;\n\n/**\n * Encode one frame worth of audio to the output file.\n * @param      frame                 Samples to be encoded\n * @param      output_format_context Format context of the output file\n * @param      output_codec_context  Codec context of the output file\n * @param[out] data_present          Indicates whether data has been\n *                                   encoded\n * @return Error code (0 if successful)\n */\nstatic int encode_audio_frame(AVFrame *frame,\n                              AVFormatContext *output_format_context,\n                              AVCodecContext *output_codec_context,\n                              int *data_present)\n{\n    /* Packet used for temporary storage. */\n    AVPacket output_packet;\n    int error;\n    init_packet(&output_packet);\n\n    /* Set a timestamp based on the sample rate for the container. */\n    if (frame) {\n        frame->pts = pts;\n        pts += frame->nb_samples;\n    }\n\n    /* Send the audio frame to the encoder.\n     * The output audio stream encoder is used to do this. */\n    error = avcodec_send_frame(output_codec_context, frame);\n    /* The encoder signals that it has nothing more to encode. */\n    if (error == AVERROR_EOF) {\n        error = 0;\n        goto cleanup;\n    } else if (error < 0) {\n        fprintf(stderr, \"Could not send frame for encoding (error '%s')\\n\",\n                av_err2str(error));\n        return error;\n    }\n\n    /* Receive one encoded frame from the encoder. 
*/\n    error = avcodec_receive_packet(output_codec_context, &output_packet);\n    /* If the encoder asks for more data to be able to provide an\n     * encoded frame, return indicating that no data is present. */\n    if (error == AVERROR(EAGAIN)) {\n        error = 0;\n        goto cleanup;\n    /* If the last frame has been encoded, stop encoding. */\n    } else if (error == AVERROR_EOF) {\n        error = 0;\n        goto cleanup;\n    } else if (error < 0) {\n        fprintf(stderr, \"Could not encode frame (error '%s')\\n\",\n                av_err2str(error));\n        goto cleanup;\n    /* Default case: Return encoded data. */\n    } else {\n        *data_present = 1;\n    }\n\n    /* Write one audio frame from the temporary packet to the output file. */\n    if (*data_present &&\n        (error = av_write_frame(output_format_context, &output_packet)) < 0) {\n        fprintf(stderr, \"Could not write frame (error '%s')\\n\",\n                av_err2str(error));\n        goto cleanup;\n    }\n\ncleanup:\n    av_packet_unref(&output_packet);\n    return error;\n}\n\n/**\n * Load one audio frame from the FIFO buffer, encode and write it to the\n * output file.\n * @param fifo                  Buffer used for temporary storage\n * @param output_format_context Format context of the output file\n * @param output_codec_context  Codec context of the output file\n * @return Error code (0 if successful)\n */\nstatic int load_encode_and_write(AVAudioFifo *fifo,\n                                 AVFormatContext *output_format_context,\n                                 AVCodecContext *output_codec_context)\n{\n    /* Temporary storage of the output samples of the frame written to the file. */\n    AVFrame *output_frame;\n    /* Use the maximum number of possible samples per frame.\n     * If there is less than the maximum possible frame size in the FIFO\n     * buffer use this number. Otherwise, use the maximum possible frame size. 
*/\n    const int frame_size = FFMIN(av_audio_fifo_size(fifo),\n                                 output_codec_context->frame_size);\n    int data_written;\n\n    /* Initialize temporary storage for one output frame. */\n    if (init_output_frame(&output_frame, output_codec_context, frame_size))\n        return AVERROR_EXIT;\n\n    /* Read as many samples from the FIFO buffer as required to fill the frame.\n     * The samples are stored in the frame temporarily. */\n    if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {\n        fprintf(stderr, \"Could not read data from FIFO\\n\");\n        av_frame_free(&output_frame);\n        return AVERROR_EXIT;\n    }\n\n    /* Encode one frame worth of audio samples. */\n    if (encode_audio_frame(output_frame, output_format_context,\n                           output_codec_context, &data_written)) {\n        av_frame_free(&output_frame);\n        return AVERROR_EXIT;\n    }\n    av_frame_free(&output_frame);\n    return 0;\n}\n\n/**\n * Write the trailer of the output file container.\n * @param output_format_context Format context of the output file\n * @return Error code (0 if successful)\n */\nstatic int write_output_file_trailer(AVFormatContext *output_format_context)\n{\n    int error;\n    if ((error = av_write_trailer(output_format_context)) < 0) {\n        fprintf(stderr, \"Could not write output file trailer (error '%s')\\n\",\n                av_err2str(error));\n        return error;\n    }\n    return 0;\n}\n\nint main(int argc, char **argv)\n{\n    AVFormatContext *input_format_context = NULL, *output_format_context = NULL;\n    AVCodecContext *input_codec_context = NULL, *output_codec_context = NULL;\n    SwrContext *resample_context = NULL;\n    AVAudioFifo *fifo = NULL;\n    int ret = AVERROR_EXIT;\n\n    if (argc != 3) {\n        fprintf(stderr, \"Usage: %s <input file> <output file>\\n\", argv[0]);\n        exit(1);\n    }\n\n    /* Open the input file for reading. 
*/\n    if (open_input_file(argv[1], &input_format_context,\n                        &input_codec_context))\n        goto cleanup;\n    /* Open the output file for writing. */\n    if (open_output_file(argv[2], input_codec_context,\n                         &output_format_context, &output_codec_context))\n        goto cleanup;\n    /* Initialize the resampler to be able to convert audio sample formats. */\n    if (init_resampler(input_codec_context, output_codec_context,\n                       &resample_context))\n        goto cleanup;\n    /* Initialize the FIFO buffer to store audio samples to be encoded. */\n    if (init_fifo(&fifo, output_codec_context))\n        goto cleanup;\n    /* Write the header of the output file container. */\n    if (write_output_file_header(output_format_context))\n        goto cleanup;\n\n    /* Loop as long as we have input samples to read or output samples\n     * to write; abort as soon as we have neither. */\n    while (1) {\n        /* Use the encoder's desired frame size for processing. */\n        const int output_frame_size = output_codec_context->frame_size;\n        int finished                = 0;\n\n        /* Make sure that there is one frame worth of samples in the FIFO\n         * buffer so that the encoder can do its work.\n         * Since the decoder's and the encoder's frame size may differ, we\n         * need the FIFO buffer to store as many frames worth of input samples\n         * as it takes to make up at least one frame worth of output samples. */\n        while (av_audio_fifo_size(fifo) < output_frame_size) {\n            /* Decode one frame worth of audio samples, convert it to the\n             * output sample format and put it into the FIFO buffer. 
*/\n            if (read_decode_convert_and_store(fifo, input_format_context,\n                                              input_codec_context,\n                                              output_codec_context,\n                                              resample_context, &finished))\n                goto cleanup;\n\n            /* If we are at the end of the input file, we continue\n             * encoding the remaining audio samples to the output file. */\n            if (finished)\n                break;\n        }\n\n        /* If we have enough samples for the encoder, we encode them.\n         * At the end of the file, we pass the remaining samples to\n         * the encoder. */\n        while (av_audio_fifo_size(fifo) >= output_frame_size ||\n               (finished && av_audio_fifo_size(fifo) > 0))\n            /* Take one frame worth of audio samples from the FIFO buffer,\n             * encode it and write it to the output file. */\n            if (load_encode_and_write(fifo, output_format_context,\n                                      output_codec_context))\n                goto cleanup;\n\n        /* If we are at the end of the input file and have encoded\n         * all remaining samples, we can exit this loop and finish. */\n        if (finished) {\n            int data_written;\n            /* Flush the encoder as it may have delayed frames. */\n            do {\n                data_written = 0;\n                if (encode_audio_frame(NULL, output_format_context,\n                                       output_codec_context, &data_written))\n                    goto cleanup;\n            } while (data_written);\n            break;\n        }\n    }\n\n    /* Write the trailer of the output file container. 
*/\n    if (write_output_file_trailer(output_format_context))\n        goto cleanup;\n    ret = 0;\n\ncleanup:\n    if (fifo)\n        av_audio_fifo_free(fifo);\n    swr_free(&resample_context);\n    if (output_codec_context)\n        avcodec_free_context(&output_codec_context);\n    if (output_format_context) {\n        avio_closep(&output_format_context->pb);\n        avformat_free_context(output_format_context);\n    }\n    if (input_codec_context)\n        avcodec_free_context(&input_codec_context);\n    if (input_format_context)\n        avformat_close_input(&input_format_context);\n\n    return ret;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/transcoding.c",
    "content": "/*\n * Copyright (c) 2010 Nicolas George\n * Copyright (c) 2011 Stefano Sabatini\n * Copyright (c) 2014 Andrey Utkin\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n/**\n * @file\n * API example for demuxing, decoding, filtering, encoding and muxing\n * @example transcoding.c\n */\n\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n#include <libavfilter/buffersink.h>\n#include <libavfilter/buffersrc.h>\n#include <libavutil/opt.h>\n#include <libavutil/pixdesc.h>\n\nstatic AVFormatContext *ifmt_ctx;\nstatic AVFormatContext *ofmt_ctx;\ntypedef struct FilteringContext {\n    AVFilterContext *buffersink_ctx;\n    AVFilterContext *buffersrc_ctx;\n    AVFilterGraph *filter_graph;\n} FilteringContext;\nstatic FilteringContext *filter_ctx;\n\ntypedef struct StreamContext {\n    AVCodecContext *dec_ctx;\n    AVCodecContext *enc_ctx;\n} StreamContext;\nstatic StreamContext *stream_ctx;\n\nstatic int 
open_input_file(const char *filename)\n{\n    int ret;\n    unsigned int i;\n\n    ifmt_ctx = NULL;\n    if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot open input file\\n\");\n        return ret;\n    }\n\n    if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Cannot find stream information\\n\");\n        return ret;\n    }\n\n    stream_ctx = av_mallocz_array(ifmt_ctx->nb_streams, sizeof(*stream_ctx));\n    if (!stream_ctx)\n        return AVERROR(ENOMEM);\n\n    for (i = 0; i < ifmt_ctx->nb_streams; i++) {\n        AVStream *stream = ifmt_ctx->streams[i];\n        AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);\n        AVCodecContext *codec_ctx;\n        if (!dec) {\n            av_log(NULL, AV_LOG_ERROR, \"Failed to find decoder for stream #%u\\n\", i);\n            return AVERROR_DECODER_NOT_FOUND;\n        }\n        codec_ctx = avcodec_alloc_context3(dec);\n        if (!codec_ctx) {\n            av_log(NULL, AV_LOG_ERROR, \"Failed to allocate the decoder context for stream #%u\\n\", i);\n            return AVERROR(ENOMEM);\n        }\n        ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Failed to copy decoder parameters to input decoder context \"\n                   \"for stream #%u\\n\", i);\n            return ret;\n        }\n        /* Reencode video & audio and remux subtitles etc. 
*/\n        if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO\n                || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {\n            if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)\n                codec_ctx->framerate = av_guess_frame_rate(ifmt_ctx, stream, NULL);\n            /* Open decoder */\n            ret = avcodec_open2(codec_ctx, dec, NULL);\n            if (ret < 0) {\n                av_log(NULL, AV_LOG_ERROR, \"Failed to open decoder for stream #%u\\n\", i);\n                return ret;\n            }\n        }\n        stream_ctx[i].dec_ctx = codec_ctx;\n    }\n\n    av_dump_format(ifmt_ctx, 0, filename, 0);\n    return 0;\n}\n\nstatic int open_output_file(const char *filename)\n{\n    AVStream *out_stream;\n    AVStream *in_stream;\n    AVCodecContext *dec_ctx, *enc_ctx;\n    AVCodec *encoder;\n    int ret;\n    unsigned int i;\n\n    ofmt_ctx = NULL;\n    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);\n    if (!ofmt_ctx) {\n        av_log(NULL, AV_LOG_ERROR, \"Could not create output context\\n\");\n        return AVERROR_UNKNOWN;\n    }\n\n\n    for (i = 0; i < ifmt_ctx->nb_streams; i++) {\n        out_stream = avformat_new_stream(ofmt_ctx, NULL);\n        if (!out_stream) {\n            av_log(NULL, AV_LOG_ERROR, \"Failed allocating output stream\\n\");\n            return AVERROR_UNKNOWN;\n        }\n\n        in_stream = ifmt_ctx->streams[i];\n        dec_ctx = stream_ctx[i].dec_ctx;\n\n        if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO\n                || dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {\n            /* in this example, we choose transcoding to same codec */\n            encoder = avcodec_find_encoder(dec_ctx->codec_id);\n            if (!encoder) {\n                av_log(NULL, AV_LOG_FATAL, \"Necessary encoder not found\\n\");\n                return AVERROR_INVALIDDATA;\n            }\n            enc_ctx = avcodec_alloc_context3(encoder);\n            if (!enc_ctx) {\n                av_log(NULL, 
AV_LOG_FATAL, \"Failed to allocate the encoder context\\n\");\n                return AVERROR(ENOMEM);\n            }\n\n            /* In this example, we transcode to same properties (picture size,\n             * sample rate etc.). These properties can be changed for output\n             * streams easily using filters */\n            if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {\n                enc_ctx->height = dec_ctx->height;\n                enc_ctx->width = dec_ctx->width;\n                enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;\n                /* take first format from list of supported formats */\n                if (encoder->pix_fmts)\n                    enc_ctx->pix_fmt = encoder->pix_fmts[0];\n                else\n                    enc_ctx->pix_fmt = dec_ctx->pix_fmt;\n                /* video time_base can be set to whatever is handy and supported by encoder */\n                enc_ctx->time_base = av_inv_q(dec_ctx->framerate);\n            } else {\n                enc_ctx->sample_rate = dec_ctx->sample_rate;\n                enc_ctx->channel_layout = dec_ctx->channel_layout;\n                enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);\n                /* take first format from list of supported formats */\n                enc_ctx->sample_fmt = encoder->sample_fmts[0];\n                enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};\n            }\n\n            /* Third parameter can be used to pass settings to encoder */\n            ret = avcodec_open2(enc_ctx, encoder, NULL);\n            if (ret < 0) {\n                av_log(NULL, AV_LOG_ERROR, \"Cannot open video encoder for stream #%u\\n\", i);\n                return ret;\n            }\n            ret = avcodec_parameters_from_context(out_stream->codecpar, enc_ctx);\n            if (ret < 0) {\n                av_log(NULL, AV_LOG_ERROR, \"Failed to copy encoder parameters to output stream #%u\\n\", i);\n          
      return ret;\n            }\n            if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)\n                enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;\n\n            out_stream->time_base = enc_ctx->time_base;\n            stream_ctx[i].enc_ctx = enc_ctx;\n        } else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {\n            av_log(NULL, AV_LOG_FATAL, \"Elementary stream #%d is of unknown type, cannot proceed\\n\", i);\n            return AVERROR_INVALIDDATA;\n        } else {\n            /* if this stream must be remuxed */\n            ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);\n            if (ret < 0) {\n                av_log(NULL, AV_LOG_ERROR, \"Copying parameters for stream #%u failed\\n\", i);\n                return ret;\n            }\n            out_stream->time_base = in_stream->time_base;\n        }\n\n    }\n    av_dump_format(ofmt_ctx, 0, filename, 1);\n\n    if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {\n        ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Could not open output file '%s'\", filename);\n            return ret;\n        }\n    }\n\n    /* init muxer, write output file header */\n    ret = avformat_write_header(ofmt_ctx, NULL);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Error occurred when opening output file\\n\");\n        return ret;\n    }\n\n    return 0;\n}\n\nstatic int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,\n        AVCodecContext *enc_ctx, const char *filter_spec)\n{\n    char args[512];\n    int ret = 0;\n    const AVFilter *buffersrc = NULL;\n    const AVFilter *buffersink = NULL;\n    AVFilterContext *buffersrc_ctx = NULL;\n    AVFilterContext *buffersink_ctx = NULL;\n    AVFilterInOut *outputs = avfilter_inout_alloc();\n    AVFilterInOut *inputs  = avfilter_inout_alloc();\n    AVFilterGraph *filter_graph = avfilter_graph_alloc();\n\n    if (!outputs || 
!inputs || !filter_graph) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {\n        buffersrc = avfilter_get_by_name(\"buffer\");\n        buffersink = avfilter_get_by_name(\"buffersink\");\n        if (!buffersrc || !buffersink) {\n            av_log(NULL, AV_LOG_ERROR, \"filtering source or sink element not found\\n\");\n            ret = AVERROR_UNKNOWN;\n            goto end;\n        }\n\n        snprintf(args, sizeof(args),\n                \"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d\",\n                dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,\n                dec_ctx->time_base.num, dec_ctx->time_base.den,\n                dec_ctx->sample_aspect_ratio.num,\n                dec_ctx->sample_aspect_ratio.den);\n\n        ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, \"in\",\n                args, NULL, filter_graph);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot create buffer source\\n\");\n            goto end;\n        }\n\n        ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, \"out\",\n                NULL, NULL, filter_graph);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot create buffer sink\\n\");\n            goto end;\n        }\n\n        ret = av_opt_set_bin(buffersink_ctx, \"pix_fmts\",\n                (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),\n                AV_OPT_SEARCH_CHILDREN);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot set output pixel format\\n\");\n            goto end;\n        }\n    } else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {\n        buffersrc = avfilter_get_by_name(\"abuffer\");\n        buffersink = avfilter_get_by_name(\"abuffersink\");\n        if (!buffersrc || !buffersink) {\n            av_log(NULL, AV_LOG_ERROR, \"filtering source or sink element not found\\n\");\n            ret = 
AVERROR_UNKNOWN;\n            goto end;\n        }\n\n        if (!dec_ctx->channel_layout)\n            dec_ctx->channel_layout =\n                av_get_default_channel_layout(dec_ctx->channels);\n        snprintf(args, sizeof(args),\n                \"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%\"PRIx64,\n                dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,\n                av_get_sample_fmt_name(dec_ctx->sample_fmt),\n                dec_ctx->channel_layout);\n        ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, \"in\",\n                args, NULL, filter_graph);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot create audio buffer source\\n\");\n            goto end;\n        }\n\n        ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, \"out\",\n                NULL, NULL, filter_graph);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot create audio buffer sink\\n\");\n            goto end;\n        }\n\n        ret = av_opt_set_bin(buffersink_ctx, \"sample_fmts\",\n                (uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),\n                AV_OPT_SEARCH_CHILDREN);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot set output sample format\\n\");\n            goto end;\n        }\n\n        ret = av_opt_set_bin(buffersink_ctx, \"channel_layouts\",\n                (uint8_t*)&enc_ctx->channel_layout,\n                sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot set output channel layout\\n\");\n            goto end;\n        }\n\n        ret = av_opt_set_bin(buffersink_ctx, \"sample_rates\",\n                (uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),\n                AV_OPT_SEARCH_CHILDREN);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Cannot set output sample 
rate\\n\");\n            goto end;\n        }\n    } else {\n        ret = AVERROR_UNKNOWN;\n        goto end;\n    }\n\n    /* Endpoints for the filter graph. */\n    outputs->name       = av_strdup(\"in\");\n    outputs->filter_ctx = buffersrc_ctx;\n    outputs->pad_idx    = 0;\n    outputs->next       = NULL;\n\n    inputs->name       = av_strdup(\"out\");\n    inputs->filter_ctx = buffersink_ctx;\n    inputs->pad_idx    = 0;\n    inputs->next       = NULL;\n\n    if (!outputs->name || !inputs->name) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,\n                    &inputs, &outputs, NULL)) < 0)\n        goto end;\n\n    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)\n        goto end;\n\n    /* Fill FilteringContext */\n    fctx->buffersrc_ctx = buffersrc_ctx;\n    fctx->buffersink_ctx = buffersink_ctx;\n    fctx->filter_graph = filter_graph;\n\nend:\n    avfilter_inout_free(&inputs);\n    avfilter_inout_free(&outputs);\n\n    return ret;\n}\n\nstatic int init_filters(void)\n{\n    const char *filter_spec;\n    unsigned int i;\n    int ret;\n    filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));\n    if (!filter_ctx)\n        return AVERROR(ENOMEM);\n\n    for (i = 0; i < ifmt_ctx->nb_streams; i++) {\n        filter_ctx[i].buffersrc_ctx  = NULL;\n        filter_ctx[i].buffersink_ctx = NULL;\n        filter_ctx[i].filter_graph   = NULL;\n        if (!(ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO\n                || ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO))\n            continue;\n\n\n        if (ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)\n            filter_spec = \"null\"; /* passthrough (dummy) filter for video */\n        else\n            filter_spec = \"anull\"; /* passthrough (dummy) filter for audio */\n        ret = init_filter(&filter_ctx[i], stream_ctx[i].dec_ctx,\n    
            stream_ctx[i].enc_ctx, filter_spec);\n        if (ret)\n            return ret;\n    }\n    return 0;\n}\n\nstatic int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {\n    int ret;\n    int got_frame_local;\n    AVPacket enc_pkt;\n    int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =\n        (ifmt_ctx->streams[stream_index]->codecpar->codec_type ==\n         AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;\n\n    if (!got_frame)\n        got_frame = &got_frame_local;\n\n    av_log(NULL, AV_LOG_INFO, \"Encoding frame\\n\");\n    /* encode filtered frame */\n    enc_pkt.data = NULL;\n    enc_pkt.size = 0;\n    av_init_packet(&enc_pkt);\n    ret = enc_func(stream_ctx[stream_index].enc_ctx, &enc_pkt,\n            filt_frame, got_frame);\n    av_frame_free(&filt_frame);\n    if (ret < 0)\n        return ret;\n    if (!(*got_frame))\n        return 0;\n\n    /* prepare packet for muxing */\n    enc_pkt.stream_index = stream_index;\n    av_packet_rescale_ts(&enc_pkt,\n                         stream_ctx[stream_index].enc_ctx->time_base,\n                         ofmt_ctx->streams[stream_index]->time_base);\n\n    av_log(NULL, AV_LOG_DEBUG, \"Muxing frame\\n\");\n    /* mux encoded frame */\n    ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);\n    return ret;\n}\n\nstatic int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)\n{\n    int ret;\n    AVFrame *filt_frame;\n\n    av_log(NULL, AV_LOG_INFO, \"Pushing decoded frame to filters\\n\");\n    /* push the decoded frame into the filtergraph */\n    ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,\n            frame, 0);\n    if (ret < 0) {\n        av_log(NULL, AV_LOG_ERROR, \"Error while feeding the filtergraph\\n\");\n        return ret;\n    }\n\n    /* pull filtered frames from the filtergraph */\n    while (1) {\n        filt_frame = av_frame_alloc();\n        if (!filt_frame) 
{\n            ret = AVERROR(ENOMEM);\n            break;\n        }\n        av_log(NULL, AV_LOG_INFO, \"Pulling filtered frame from filters\\n\");\n        ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,\n                filt_frame);\n        if (ret < 0) {\n            /* if no more frames for output - returns AVERROR(EAGAIN)\n             * if flushed and no more frames for output - returns AVERROR_EOF\n             * rewrite retcode to 0 to show it as normal procedure completion\n             */\n            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)\n                ret = 0;\n            av_frame_free(&filt_frame);\n            break;\n        }\n\n        filt_frame->pict_type = AV_PICTURE_TYPE_NONE;\n        ret = encode_write_frame(filt_frame, stream_index, NULL);\n        if (ret < 0)\n            break;\n    }\n\n    return ret;\n}\n\nstatic int flush_encoder(unsigned int stream_index)\n{\n    int ret;\n    int got_frame;\n\n    if (!(stream_ctx[stream_index].enc_ctx->codec->capabilities &\n                AV_CODEC_CAP_DELAY))\n        return 0;\n\n    while (1) {\n        av_log(NULL, AV_LOG_INFO, \"Flushing stream #%u encoder\\n\", stream_index);\n        ret = encode_write_frame(NULL, stream_index, &got_frame);\n        if (ret < 0)\n            break;\n        if (!got_frame)\n            return 0;\n    }\n    return ret;\n}\n\nint main(int argc, char **argv)\n{\n    int ret;\n    AVPacket packet = { .data = NULL, .size = 0 };\n    AVFrame *frame = NULL;\n    enum AVMediaType type;\n    unsigned int stream_index;\n    unsigned int i;\n    int got_frame;\n    int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);\n\n    if (argc != 3) {\n        av_log(NULL, AV_LOG_ERROR, \"Usage: %s <input file> <output file>\\n\", argv[0]);\n        return 1;\n    }\n\n    if ((ret = open_input_file(argv[1])) < 0)\n        goto end;\n    if ((ret = open_output_file(argv[2])) < 0)\n        goto end;\n    if ((ret = 
init_filters()) < 0)\n        goto end;\n\n    /* read all packets */\n    while (1) {\n        if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)\n            break;\n        stream_index = packet.stream_index;\n        type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;\n        av_log(NULL, AV_LOG_DEBUG, \"Demuxer gave frame of stream_index %u\\n\",\n                stream_index);\n\n        if (filter_ctx[stream_index].filter_graph) {\n            av_log(NULL, AV_LOG_DEBUG, \"Going to reencode&filter the frame\\n\");\n            frame = av_frame_alloc();\n            if (!frame) {\n                ret = AVERROR(ENOMEM);\n                break;\n            }\n            av_packet_rescale_ts(&packet,\n                                 ifmt_ctx->streams[stream_index]->time_base,\n                                 stream_ctx[stream_index].dec_ctx->time_base);\n            dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :\n                avcodec_decode_audio4;\n            ret = dec_func(stream_ctx[stream_index].dec_ctx, frame,\n                    &got_frame, &packet);\n            if (ret < 0) {\n                av_frame_free(&frame);\n                av_log(NULL, AV_LOG_ERROR, \"Decoding failed\\n\");\n                break;\n            }\n\n            if (got_frame) {\n                frame->pts = frame->best_effort_timestamp;\n                ret = filter_encode_write_frame(frame, stream_index);\n                av_frame_free(&frame);\n                if (ret < 0)\n                    goto end;\n            } else {\n                av_frame_free(&frame);\n            }\n        } else {\n            /* remux this frame without reencoding */\n            av_packet_rescale_ts(&packet,\n                                 ifmt_ctx->streams[stream_index]->time_base,\n                                 ofmt_ctx->streams[stream_index]->time_base);\n\n            ret = av_interleaved_write_frame(ofmt_ctx, &packet);\n            if (ret 
< 0)\n                goto end;\n        }\n        av_packet_unref(&packet);\n    }\n\n    /* flush filters and encoders */\n    for (i = 0; i < ifmt_ctx->nb_streams; i++) {\n        /* flush filter */\n        if (!filter_ctx[i].filter_graph)\n            continue;\n        ret = filter_encode_write_frame(NULL, i);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Flushing filter failed\\n\");\n            goto end;\n        }\n\n        /* flush encoder */\n        ret = flush_encoder(i);\n        if (ret < 0) {\n            av_log(NULL, AV_LOG_ERROR, \"Flushing encoder failed\\n\");\n            goto end;\n        }\n    }\n\n    av_write_trailer(ofmt_ctx);\nend:\n    av_packet_unref(&packet);\n    av_frame_free(&frame);\n    for (i = 0; i < ifmt_ctx->nb_streams; i++) {\n        avcodec_free_context(&stream_ctx[i].dec_ctx);\n        if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && stream_ctx[i].enc_ctx)\n            avcodec_free_context(&stream_ctx[i].enc_ctx);\n        if (filter_ctx && filter_ctx[i].filter_graph)\n            avfilter_graph_free(&filter_ctx[i].filter_graph);\n    }\n    av_free(filter_ctx);\n    av_free(stream_ctx);\n    avformat_close_input(&ifmt_ctx);\n    if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))\n        avio_closep(&ofmt_ctx->pb);\n    avformat_free_context(ofmt_ctx);\n\n    if (ret < 0)\n        av_log(NULL, AV_LOG_ERROR, \"Error occurred: %s\\n\", av_err2str(ret));\n\n    return ret ? 1 : 0;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/vaapi_encode.c",
    "content": "/*\n * Video Acceleration API (video encoding) encode sample\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Intel VAAPI-accelerated encoding example.\n *\n * @example vaapi_encode.c\n * This example shows how to do VAAPI-accelerated encoding. 
Currently only NV12\n * raw input is supported; usage: vaapi_encode 1920 1080 input.yuv output.h264\n *\n */\n\n#include <stdio.h>\n#include <string.h>\n#include <errno.h>\n\n#include <libavcodec/avcodec.h>\n#include <libavutil/pixdesc.h>\n#include <libavutil/hwcontext.h>\n\nstatic int width, height;\nstatic AVBufferRef *hw_device_ctx = NULL;\n\nstatic int set_hwframe_ctx(AVCodecContext *ctx, AVBufferRef *hw_device_ctx)\n{\n    AVBufferRef *hw_frames_ref;\n    AVHWFramesContext *frames_ctx = NULL;\n    int err = 0;\n\n    if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {\n        fprintf(stderr, \"Failed to create VAAPI frame context.\\n\");\n        return -1;\n    }\n    frames_ctx = (AVHWFramesContext *)(hw_frames_ref->data);\n    frames_ctx->format    = AV_PIX_FMT_VAAPI;\n    frames_ctx->sw_format = AV_PIX_FMT_NV12;\n    frames_ctx->width     = width;\n    frames_ctx->height    = height;\n    frames_ctx->initial_pool_size = 20;\n    if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {\n        fprintf(stderr, \"Failed to initialize VAAPI frame context. \"\n                \"Error code: %s\\n\", av_err2str(err));\n        av_buffer_unref(&hw_frames_ref);\n        return err;\n    }\n    ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);\n    if (!ctx->hw_frames_ctx)\n        err = AVERROR(ENOMEM);\n\n    av_buffer_unref(&hw_frames_ref);\n    return err;\n}\n\nstatic int encode_write(AVCodecContext *avctx, AVFrame *frame, FILE *fout)\n{\n    int ret = 0;\n    AVPacket enc_pkt;\n\n    av_init_packet(&enc_pkt);\n    enc_pkt.data = NULL;\n    enc_pkt.size = 0;\n\n    if ((ret = avcodec_send_frame(avctx, frame)) < 0) {\n        fprintf(stderr, \"Error code: %s\\n\", av_err2str(ret));\n        goto end;\n    }\n    while (1) {\n        ret = avcodec_receive_packet(avctx, &enc_pkt);\n        if (ret)\n            break;\n\n        enc_pkt.stream_index = 0;\n        ret = fwrite(enc_pkt.data, enc_pkt.size, 1, fout);\n        av_packet_unref(&enc_pkt);\n    
}\n\nend:\n    ret = ((ret == AVERROR(EAGAIN)) ? 0 : -1);\n    return ret;\n}\n\nint main(int argc, char *argv[])\n{\n    int size, err;\n    FILE *fin = NULL, *fout = NULL;\n    AVFrame *sw_frame = NULL, *hw_frame = NULL;\n    AVCodecContext *avctx = NULL;\n    AVCodec *codec = NULL;\n    const char *enc_name = \"h264_vaapi\";\n\n    if (argc < 5) {\n        fprintf(stderr, \"Usage: %s <width> <height> <input file> <output file>\\n\", argv[0]);\n        return -1;\n    }\n\n    width  = atoi(argv[1]);\n    height = atoi(argv[2]);\n    size   = width * height;\n\n    if (!(fin = fopen(argv[3], \"r\"))) {\n        fprintf(stderr, \"Failed to open input file: %s\\n\", strerror(errno));\n        return -1;\n    }\n    if (!(fout = fopen(argv[4], \"w+b\"))) {\n        fprintf(stderr, \"Failed to open output file: %s\\n\", strerror(errno));\n        err = -1;\n        goto close;\n    }\n\n    err = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI,\n                                 NULL, NULL, 0);\n    if (err < 0) {\n        fprintf(stderr, \"Failed to create a VAAPI device. 
Error code: %s\\n\", av_err2str(err));\n        goto close;\n    }\n\n    if (!(codec = avcodec_find_encoder_by_name(enc_name))) {\n        fprintf(stderr, \"Could not find encoder.\\n\");\n        err = -1;\n        goto close;\n    }\n\n    if (!(avctx = avcodec_alloc_context3(codec))) {\n        err = AVERROR(ENOMEM);\n        goto close;\n    }\n\n    avctx->width     = width;\n    avctx->height    = height;\n    avctx->time_base = (AVRational){1, 25};\n    avctx->framerate = (AVRational){25, 1};\n    avctx->sample_aspect_ratio = (AVRational){1, 1};\n    avctx->pix_fmt   = AV_PIX_FMT_VAAPI;\n\n    /* set hw_frames_ctx for encoder's AVCodecContext */\n    if ((err = set_hwframe_ctx(avctx, hw_device_ctx)) < 0) {\n        fprintf(stderr, \"Failed to set hwframe context.\\n\");\n        goto close;\n    }\n\n    if ((err = avcodec_open2(avctx, codec, NULL)) < 0) {\n        fprintf(stderr, \"Cannot open video encoder codec. Error code: %s\\n\", av_err2str(err));\n        goto close;\n    }\n\n    while (1) {\n        if (!(sw_frame = av_frame_alloc())) {\n            err = AVERROR(ENOMEM);\n            goto close;\n        }\n        /* read data into software frame, and transfer them into hw frame */\n        sw_frame->width  = width;\n        sw_frame->height = height;\n        sw_frame->format = AV_PIX_FMT_NV12;\n        if ((err = av_frame_get_buffer(sw_frame, 32)) < 0)\n            goto close;\n        if ((err = fread((uint8_t*)(sw_frame->data[0]), size, 1, fin)) <= 0)\n            break;\n        if ((err = fread((uint8_t*)(sw_frame->data[1]), size/2, 1, fin)) <= 0)\n            break;\n\n        if (!(hw_frame = av_frame_alloc())) {\n            err = AVERROR(ENOMEM);\n            goto close;\n        }\n        if ((err = av_hwframe_get_buffer(avctx->hw_frames_ctx, hw_frame, 0)) < 0) {\n            fprintf(stderr, \"Error code: %s.\\n\", av_err2str(err));\n            goto close;\n        }\n        if (!hw_frame->hw_frames_ctx) {\n            err = 
AVERROR(ENOMEM);\n            goto close;\n        }\n        if ((err = av_hwframe_transfer_data(hw_frame, sw_frame, 0)) < 0) {\n            fprintf(stderr, \"Error while transferring frame data to surface.\"\n                    \"Error code: %s.\\n\", av_err2str(err));\n            goto close;\n        }\n\n        if ((err = (encode_write(avctx, hw_frame, fout))) < 0) {\n            fprintf(stderr, \"Failed to encode.\\n\");\n            goto close;\n        }\n        av_frame_free(&hw_frame);\n        av_frame_free(&sw_frame);\n    }\n\n    /* flush encoder */\n    err = encode_write(avctx, NULL, fout);\n    if (err == AVERROR_EOF)\n        err = 0;\n\nclose:\n    if (fin)\n        fclose(fin);\n    if (fout)\n        fclose(fout);\n    av_frame_free(&sw_frame);\n    av_frame_free(&hw_frame);\n    avcodec_free_context(&avctx);\n    av_buffer_unref(&hw_device_ctx);\n\n    return err;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/examples/vaapi_transcode.c",
    "content": "/*\n * Video Acceleration API (video transcoding) transcode sample\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Intel VAAPI-accelerated transcoding example.\n *\n * @example vaapi_transcode.c\n * This example shows how to do VAAPI-accelerated transcoding.\n * Usage: vaapi_transcode input_stream codec output_stream\n * e.g: - vaapi_transcode input.mp4 h264_vaapi output_h264.mp4\n *      - vaapi_transcode input.mp4 vp9_vaapi output_vp9.ivf\n */\n\n#include <stdio.h>\n#include <errno.h>\n\n#include <libavutil/hwcontext.h>\n#include <libavcodec/avcodec.h>\n#include <libavformat/avformat.h>\n\nstatic AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;\nstatic AVBufferRef *hw_device_ctx = NULL;\nstatic AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;\nstatic int video_stream = -1;\nstatic AVStream *ost;\nstatic int initialized = 0;\n\nstatic enum AVPixelFormat get_vaapi_format(AVCodecContext *ctx,\n                                           const enum AVPixelFormat *pix_fmts)\n{\n    const enum AVPixelFormat *p;\n\n    for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {\n        if (*p == AV_PIX_FMT_VAAPI)\n            return *p;\n    }\n\n    fprintf(stderr, \"Unable to decode this file using 
VA-API.\\n\");\n    return AV_PIX_FMT_NONE;\n}\n\nstatic int open_input_file(const char *filename)\n{\n    int ret;\n    AVCodec *decoder = NULL;\n    AVStream *video = NULL;\n\n    if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {\n        fprintf(stderr, \"Cannot open input file '%s', Error code: %s\\n\",\n                filename, av_err2str(ret));\n        return ret;\n    }\n\n    if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {\n        fprintf(stderr, \"Cannot find input stream information. Error code: %s\\n\",\n                av_err2str(ret));\n        return ret;\n    }\n\n    ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Cannot find a video stream in the input file. \"\n                \"Error code: %s\\n\", av_err2str(ret));\n        return ret;\n    }\n    video_stream = ret;\n\n    if (!(decoder_ctx = avcodec_alloc_context3(decoder)))\n        return AVERROR(ENOMEM);\n\n    video = ifmt_ctx->streams[video_stream];\n    if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {\n        fprintf(stderr, \"avcodec_parameters_to_context error. Error code: %s\\n\",\n                av_err2str(ret));\n        return ret;\n    }\n\n    decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);\n    if (!decoder_ctx->hw_device_ctx) {\n        fprintf(stderr, \"A hardware device reference create failed.\\n\");\n        return AVERROR(ENOMEM);\n    }\n    decoder_ctx->get_format    = get_vaapi_format;\n\n    if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)\n        fprintf(stderr, \"Failed to open codec for decoding. 
Error code: %s\\n\",\n                av_err2str(ret));\n\n    return ret;\n}\n\nstatic int encode_write(AVFrame *frame)\n{\n    int ret = 0;\n    AVPacket enc_pkt;\n\n    av_init_packet(&enc_pkt);\n    enc_pkt.data = NULL;\n    enc_pkt.size = 0;\n\n    if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {\n        fprintf(stderr, \"Error during encoding. Error code: %s\\n\", av_err2str(ret));\n        goto end;\n    }\n    while (1) {\n        ret = avcodec_receive_packet(encoder_ctx, &enc_pkt);\n        if (ret)\n            break;\n\n        enc_pkt.stream_index = 0;\n        av_packet_rescale_ts(&enc_pkt, ifmt_ctx->streams[video_stream]->time_base,\n                             ofmt_ctx->streams[0]->time_base);\n        ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);\n        if (ret < 0) {\n            fprintf(stderr, \"Error during writing data to output file. \"\n                    \"Error code: %s\\n\", av_err2str(ret));\n            return -1;\n        }\n    }\n\nend:\n    if (ret == AVERROR_EOF)\n        return 0;\n    ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);\n    return ret;\n}\n\nstatic int dec_enc(AVPacket *pkt, AVCodec *enc_codec)\n{\n    AVFrame *frame;\n    int ret = 0;\n\n    ret = avcodec_send_packet(decoder_ctx, pkt);\n    if (ret < 0) {\n        fprintf(stderr, \"Error during decoding. Error code: %s\\n\", av_err2str(ret));\n        return ret;\n    }\n\n    while (ret >= 0) {\n        if (!(frame = av_frame_alloc()))\n            return AVERROR(ENOMEM);\n\n        ret = avcodec_receive_frame(decoder_ctx, frame);\n        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {\n            av_frame_free(&frame);\n            return 0;\n        } else if (ret < 0) {\n            fprintf(stderr, \"Error while decoding. 
Error code: %s\\n\", av_err2str(ret));\n            goto fail;\n        }\n\n        if (!initialized) {\n            /* we need to ref hw_frames_ctx of decoder to initialize encoder's codec.\n               Only after we get a decoded frame, can we obtain its hw_frames_ctx */\n            encoder_ctx->hw_frames_ctx = av_buffer_ref(decoder_ctx->hw_frames_ctx);\n            if (!encoder_ctx->hw_frames_ctx) {\n                ret = AVERROR(ENOMEM);\n                goto fail;\n            }\n            /* set AVCodecContext Parameters for encoder, here we keep them stay\n             * the same as decoder.\n             * xxx: now the the sample can't handle resolution change case.\n             */\n            encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);\n            encoder_ctx->pix_fmt   = AV_PIX_FMT_VAAPI;\n            encoder_ctx->width     = decoder_ctx->width;\n            encoder_ctx->height    = decoder_ctx->height;\n\n            if ((ret = avcodec_open2(encoder_ctx, enc_codec, NULL)) < 0) {\n                fprintf(stderr, \"Failed to open encode codec. Error code: %s\\n\",\n                        av_err2str(ret));\n                goto fail;\n            }\n\n            if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {\n                fprintf(stderr, \"Failed to allocate stream for output format.\\n\");\n                ret = AVERROR(ENOMEM);\n                goto fail;\n            }\n\n            ost->time_base = encoder_ctx->time_base;\n            ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);\n            if (ret < 0) {\n                fprintf(stderr, \"Failed to copy the stream parameters. \"\n                        \"Error code: %s\\n\", av_err2str(ret));\n                goto fail;\n            }\n\n            /* write the stream header */\n            if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {\n                fprintf(stderr, \"Error while writing stream header. 
\"\n                        \"Error code: %s\\n\", av_err2str(ret));\n                goto fail;\n            }\n\n            initialized = 1;\n        }\n\n        if ((ret = encode_write(frame)) < 0)\n            fprintf(stderr, \"Error during encoding and writing.\\n\");\n\nfail:\n        av_frame_free(&frame);\n        if (ret < 0)\n            return ret;\n    }\n    return 0;\n}\n\nint main(int argc, char **argv)\n{\n    int ret = 0;\n    AVPacket dec_pkt;\n    AVCodec *enc_codec;\n\n    if (argc != 4) {\n        fprintf(stderr, \"Usage: %s <input file> <encode codec> <output file>\\n\"\n                \"The output format is guessed according to the file extension.\\n\"\n                \"\\n\", argv[0]);\n        return -1;\n    }\n\n    ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0);\n    if (ret < 0) {\n        fprintf(stderr, \"Failed to create a VAAPI device. Error code: %s\\n\", av_err2str(ret));\n        return -1;\n    }\n\n    if ((ret = open_input_file(argv[1])) < 0)\n        goto end;\n\n    if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {\n        fprintf(stderr, \"Could not find encoder '%s'\\n\", argv[2]);\n        ret = -1;\n        goto end;\n    }\n\n    if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {\n        fprintf(stderr, \"Failed to deduce output format from file extension. Error code: \"\n                \"%s\\n\", av_err2str(ret));\n        goto end;\n    }\n\n    if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {\n        ret = AVERROR(ENOMEM);\n        goto end;\n    }\n\n    ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);\n    if (ret < 0) {\n        fprintf(stderr, \"Cannot open output file. 
\"\n                \"Error code: %s\\n\", av_err2str(ret));\n        goto end;\n    }\n\n    /* read all packets and only transcoding video */\n    while (ret >= 0) {\n        if ((ret = av_read_frame(ifmt_ctx, &dec_pkt)) < 0)\n            break;\n\n        if (video_stream == dec_pkt.stream_index)\n            ret = dec_enc(&dec_pkt, enc_codec);\n\n        av_packet_unref(&dec_pkt);\n    }\n\n    /* flush decoder */\n    dec_pkt.data = NULL;\n    dec_pkt.size = 0;\n    ret = dec_enc(&dec_pkt, enc_codec);\n    av_packet_unref(&dec_pkt);\n\n    /* flush encoder */\n    ret = encode_write(NULL);\n\n    /* write the trailer for output stream */\n    av_write_trailer(ofmt_ctx);\n\nend:\n    avformat_close_input(&ifmt_ctx);\n    avformat_close_input(&ofmt_ctx);\n    avcodec_free_context(&decoder_ctx);\n    avcodec_free_context(&encoder_ctx);\n    av_buffer_unref(&hw_device_ctx);\n    return ret;\n}\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/ac3_parser.h",
    "content": "/*\n * AC-3 parser prototypes\n * Copyright (c) 2003 Fabrice Bellard\n * Copyright (c) 2003 Michael Niedermayer\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_AC3_PARSER_H\n#define AVCODEC_AC3_PARSER_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n/**\n * Extract the bitstream ID and the frame size from AC-3 data.\n */\nint av_ac3_parse_header(const uint8_t *buf, size_t size,\n                        uint8_t *bitstream_id, uint16_t *frame_size);\n\n\n#endif /* AVCODEC_AC3_PARSER_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/adts_parser.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_ADTS_PARSER_H\n#define AVCODEC_ADTS_PARSER_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n#define AV_AAC_ADTS_HEADER_SIZE 7\n\n/**\n * Extract the number of samples and frames from AAC data.\n * @param[in]  buf     pointer to AAC data buffer\n * @param[out] samples Pointer to where number of samples is written\n * @param[out] frames  Pointer to where number of frames is written\n * @return Returns 0 on success, error code on failure.\n */\nint av_adts_header_parse(const uint8_t *buf, uint32_t *samples,\n                         uint8_t *frames);\n\n#endif /* AVCODEC_ADTS_PARSER_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/avcodec.h",
    "content": "/*\n * copyright (c) 2001 Fabrice Bellard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_AVCODEC_H\n#define AVCODEC_AVCODEC_H\n\n/**\n * @file\n * @ingroup libavc\n * Libavcodec external API header\n */\n\n#include <errno.h>\n#include \"libavutil/samplefmt.h\"\n#include \"libavutil/attributes.h\"\n#include \"libavutil/avutil.h\"\n#include \"libavutil/buffer.h\"\n#include \"libavutil/cpu.h\"\n#include \"libavutil/channel_layout.h\"\n#include \"libavutil/dict.h\"\n#include \"libavutil/frame.h\"\n#include \"libavutil/hwcontext.h\"\n#include \"libavutil/log.h\"\n#include \"libavutil/pixfmt.h\"\n#include \"libavutil/rational.h\"\n\n#include \"version.h\"\n\n/**\n * @defgroup libavc libavcodec\n * Encoding/Decoding Library\n *\n * @{\n *\n * @defgroup lavc_decoding Decoding\n * @{\n * @}\n *\n * @defgroup lavc_encoding Encoding\n * @{\n * @}\n *\n * @defgroup lavc_codec Codecs\n * @{\n * @defgroup lavc_codec_native Native Codecs\n * @{\n * @}\n * @defgroup lavc_codec_wrappers External library wrappers\n * @{\n * @}\n * @defgroup lavc_codec_hwaccel Hardware Accelerators bridge\n * @{\n * @}\n * @}\n * @defgroup lavc_internal Internal\n * @{\n * @}\n * @}\n */\n\n/**\n * @ingroup libavc\n * @defgroup lavc_encdec 
send/receive encoding and decoding API overview\n * @{\n *\n * The avcodec_send_packet()/avcodec_receive_frame()/avcodec_send_frame()/\n * avcodec_receive_packet() functions provide an encode/decode API, which\n * decouples input and output.\n *\n * The API is very similar for encoding/decoding and audio/video, and works as\n * follows:\n * - Set up and open the AVCodecContext as usual.\n * - Send valid input:\n *   - For decoding, call avcodec_send_packet() to give the decoder raw\n *     compressed data in an AVPacket.\n *   - For encoding, call avcodec_send_frame() to give the encoder an AVFrame\n *     containing uncompressed audio or video.\n *   In both cases, it is recommended that AVPackets and AVFrames are\n *   refcounted, or libavcodec might have to copy the input data. (libavformat\n *   always returns refcounted AVPackets, and av_frame_get_buffer() allocates\n *   refcounted AVFrames.)\n * - Receive output in a loop. Periodically call one of the avcodec_receive_*()\n *   functions and process their output:\n *   - For decoding, call avcodec_receive_frame(). On success, it will return\n *     an AVFrame containing uncompressed audio or video data.\n *   - For encoding, call avcodec_receive_packet(). On success, it will return\n *     an AVPacket with a compressed frame.\n *   Repeat this call until it returns AVERROR(EAGAIN) or an error. The\n *   AVERROR(EAGAIN) return value means that new input data is required to\n *   return new output. In this case, continue with sending input. For each\n *   input frame/packet, the codec will typically return 1 output frame/packet,\n *   but it can also be 0 or more than 1.\n *\n * At the beginning of decoding or encoding, the codec might accept multiple\n * input frames/packets without returning a frame, until its internal buffers\n * are filled. 
This situation is handled transparently if you follow the steps\n * outlined above.\n *\n * In theory, sending input can result in EAGAIN - this should happen only if\n * not all output was received. You can use this to structure alternative decode\n * or encode loops other than the one suggested above. For example, you could\n * try sending new input on each iteration, and try to receive output if that\n * returns EAGAIN.\n *\n * End of stream situations. These require \"flushing\" (aka draining) the codec,\n * as the codec might buffer multiple frames or packets internally for\n * performance or out of necessity (consider B-frames).\n * This is handled as follows:\n * - Instead of valid input, send NULL to the avcodec_send_packet() (decoding)\n *   or avcodec_send_frame() (encoding) functions. This will enter draining\n *   mode.\n * - Call avcodec_receive_frame() (decoding) or avcodec_receive_packet()\n *   (encoding) in a loop until AVERROR_EOF is returned. The functions will\n *   not return AVERROR(EAGAIN), unless you forgot to enter draining mode.\n * - Before decoding can be resumed again, the codec has to be reset with\n *   avcodec_flush_buffers().\n *\n * Using the API as outlined above is highly recommended. But it is also\n * possible to call functions outside of this rigid schema. For example, you can\n * call avcodec_send_packet() repeatedly without calling\n * avcodec_receive_frame(). In this case, avcodec_send_packet() will succeed\n * until the codec's internal buffer has been filled up (which is typically of\n * size 1 per output frame, after initial input), and then reject input with\n * AVERROR(EAGAIN). 
Once it starts rejecting input, you have no choice but to\n * read at least some output.\n *\n * Not all codecs will follow a rigid and predictable dataflow; the only\n * guarantee is that an AVERROR(EAGAIN) return value on a send/receive call on\n * one end implies that a receive/send call on the other end will succeed, or\n * at least will not fail with AVERROR(EAGAIN). In general, no codec will\n * permit unlimited buffering of input or output.\n *\n * This API replaces the following legacy functions:\n * - avcodec_decode_video2() and avcodec_decode_audio4():\n *   Use avcodec_send_packet() to feed input to the decoder, then use\n *   avcodec_receive_frame() to receive decoded frames after each packet.\n *   Unlike with the old video decoding API, multiple frames might result from\n *   a packet. For audio, splitting the input packet into frames by partially\n *   decoding packets becomes transparent to the API user. You never need to\n *   feed an AVPacket to the API twice (unless it is rejected with AVERROR(EAGAIN) - then\n *   no data was read from the packet).\n *   Additionally, sending a flush/draining packet is required only once.\n * - avcodec_encode_video2()/avcodec_encode_audio2():\n *   Use avcodec_send_frame() to feed input to the encoder, then use\n *   avcodec_receive_packet() to receive encoded packets.\n *   Providing user-allocated buffers for avcodec_receive_packet() is not\n *   possible.\n * - The new API does not handle subtitles yet.\n *\n * Mixing new and old function calls on the same AVCodecContext is not allowed,\n * and will result in undefined behavior.\n *\n * Some codecs might require using the new API; using the old API will return\n * an error when calling it. All codecs support the new API.\n *\n * A codec is not allowed to return AVERROR(EAGAIN) for both sending and receiving. This\n * would be an invalid state, which could put the codec user into an endless\n * loop. 
The API has no concept of time either: it cannot happen that trying to\n * do avcodec_send_packet() results in AVERROR(EAGAIN), but a repeated call 1 second\n * later accepts the packet (with no other receive/flush API calls involved).\n * The API is a strict state machine, and the passage of time is not supposed\n * to influence it. Some timing-dependent behavior might still be deemed\n * acceptable in certain cases. But it must never result in both send/receive\n * returning EAGAIN at the same time at any point. It must also absolutely be\n * avoided that the current state is \"unstable\" and can \"flip-flop\" between\n * the send/receive APIs allowing progress. For example, it's not allowed that\n * the codec randomly decides that it actually wants to consume a packet now\n * instead of returning a frame, after it just returned AVERROR(EAGAIN) on an\n * avcodec_send_packet() call.\n * @}\n */\n\n/**\n * @defgroup lavc_core Core functions/structures.\n * @ingroup libavc\n *\n * Basic definitions, functions for querying libavcodec capabilities,\n * allocating core structures, etc.\n * @{\n */\n\n\n/**\n * Identify the syntax and semantics of the bitstream.\n * The principle is roughly:\n * Two decoders with the same ID can decode the same streams.\n * Two encoders with the same ID can encode compatible streams.\n * There may be slight deviations from the principle due to implementation\n * details.\n *\n * If you add a codec ID to this list, add it so that\n * 1. no value of an existing codec ID changes (that would break ABI),\n * 2. 
it is as close as possible to similar codecs\n *\n * After adding new codec IDs, do not forget to add an entry to the codec\n * descriptor list and bump libavcodec minor version.\n */\nenum AVCodecID {\n    AV_CODEC_ID_NONE,\n\n    /* video codecs */\n    AV_CODEC_ID_MPEG1VIDEO,\n    AV_CODEC_ID_MPEG2VIDEO, ///< preferred ID for MPEG-1/2 video decoding\n    AV_CODEC_ID_H261,\n    AV_CODEC_ID_H263,\n    AV_CODEC_ID_RV10,\n    AV_CODEC_ID_RV20,\n    AV_CODEC_ID_MJPEG,\n    AV_CODEC_ID_MJPEGB,\n    AV_CODEC_ID_LJPEG,\n    AV_CODEC_ID_SP5X,\n    AV_CODEC_ID_JPEGLS,\n    AV_CODEC_ID_MPEG4,\n    AV_CODEC_ID_RAWVIDEO,\n    AV_CODEC_ID_MSMPEG4V1,\n    AV_CODEC_ID_MSMPEG4V2,\n    AV_CODEC_ID_MSMPEG4V3,\n    AV_CODEC_ID_WMV1,\n    AV_CODEC_ID_WMV2,\n    AV_CODEC_ID_H263P,\n    AV_CODEC_ID_H263I,\n    AV_CODEC_ID_FLV1,\n    AV_CODEC_ID_SVQ1,\n    AV_CODEC_ID_SVQ3,\n    AV_CODEC_ID_DVVIDEO,\n    AV_CODEC_ID_HUFFYUV,\n    AV_CODEC_ID_CYUV,\n    AV_CODEC_ID_H264,\n    AV_CODEC_ID_INDEO3,\n    AV_CODEC_ID_VP3,\n    AV_CODEC_ID_THEORA,\n    AV_CODEC_ID_ASV1,\n    AV_CODEC_ID_ASV2,\n    AV_CODEC_ID_FFV1,\n    AV_CODEC_ID_4XM,\n    AV_CODEC_ID_VCR1,\n    AV_CODEC_ID_CLJR,\n    AV_CODEC_ID_MDEC,\n    AV_CODEC_ID_ROQ,\n    AV_CODEC_ID_INTERPLAY_VIDEO,\n    AV_CODEC_ID_XAN_WC3,\n    AV_CODEC_ID_XAN_WC4,\n    AV_CODEC_ID_RPZA,\n    AV_CODEC_ID_CINEPAK,\n    AV_CODEC_ID_WS_VQA,\n    AV_CODEC_ID_MSRLE,\n    AV_CODEC_ID_MSVIDEO1,\n    AV_CODEC_ID_IDCIN,\n    AV_CODEC_ID_8BPS,\n    AV_CODEC_ID_SMC,\n    AV_CODEC_ID_FLIC,\n    AV_CODEC_ID_TRUEMOTION1,\n    AV_CODEC_ID_VMDVIDEO,\n    AV_CODEC_ID_MSZH,\n    AV_CODEC_ID_ZLIB,\n    AV_CODEC_ID_QTRLE,\n    AV_CODEC_ID_TSCC,\n    AV_CODEC_ID_ULTI,\n    AV_CODEC_ID_QDRAW,\n    AV_CODEC_ID_VIXL,\n    AV_CODEC_ID_QPEG,\n    AV_CODEC_ID_PNG,\n    AV_CODEC_ID_PPM,\n    AV_CODEC_ID_PBM,\n    AV_CODEC_ID_PGM,\n    AV_CODEC_ID_PGMYUV,\n    AV_CODEC_ID_PAM,\n    AV_CODEC_ID_FFVHUFF,\n    AV_CODEC_ID_RV30,\n    AV_CODEC_ID_RV40,\n    AV_CODEC_ID_VC1,\n    
AV_CODEC_ID_WMV3,\n    AV_CODEC_ID_LOCO,\n    AV_CODEC_ID_WNV1,\n    AV_CODEC_ID_AASC,\n    AV_CODEC_ID_INDEO2,\n    AV_CODEC_ID_FRAPS,\n    AV_CODEC_ID_TRUEMOTION2,\n    AV_CODEC_ID_BMP,\n    AV_CODEC_ID_CSCD,\n    AV_CODEC_ID_MMVIDEO,\n    AV_CODEC_ID_ZMBV,\n    AV_CODEC_ID_AVS,\n    AV_CODEC_ID_SMACKVIDEO,\n    AV_CODEC_ID_NUV,\n    AV_CODEC_ID_KMVC,\n    AV_CODEC_ID_FLASHSV,\n    AV_CODEC_ID_CAVS,\n    AV_CODEC_ID_JPEG2000,\n    AV_CODEC_ID_VMNC,\n    AV_CODEC_ID_VP5,\n    AV_CODEC_ID_VP6,\n    AV_CODEC_ID_VP6F,\n    AV_CODEC_ID_TARGA,\n    AV_CODEC_ID_DSICINVIDEO,\n    AV_CODEC_ID_TIERTEXSEQVIDEO,\n    AV_CODEC_ID_TIFF,\n    AV_CODEC_ID_GIF,\n    AV_CODEC_ID_DXA,\n    AV_CODEC_ID_DNXHD,\n    AV_CODEC_ID_THP,\n    AV_CODEC_ID_SGI,\n    AV_CODEC_ID_C93,\n    AV_CODEC_ID_BETHSOFTVID,\n    AV_CODEC_ID_PTX,\n    AV_CODEC_ID_TXD,\n    AV_CODEC_ID_VP6A,\n    AV_CODEC_ID_AMV,\n    AV_CODEC_ID_VB,\n    AV_CODEC_ID_PCX,\n    AV_CODEC_ID_SUNRAST,\n    AV_CODEC_ID_INDEO4,\n    AV_CODEC_ID_INDEO5,\n    AV_CODEC_ID_MIMIC,\n    AV_CODEC_ID_RL2,\n    AV_CODEC_ID_ESCAPE124,\n    AV_CODEC_ID_DIRAC,\n    AV_CODEC_ID_BFI,\n    AV_CODEC_ID_CMV,\n    AV_CODEC_ID_MOTIONPIXELS,\n    AV_CODEC_ID_TGV,\n    AV_CODEC_ID_TGQ,\n    AV_CODEC_ID_TQI,\n    AV_CODEC_ID_AURA,\n    AV_CODEC_ID_AURA2,\n    AV_CODEC_ID_V210X,\n    AV_CODEC_ID_TMV,\n    AV_CODEC_ID_V210,\n    AV_CODEC_ID_DPX,\n    AV_CODEC_ID_MAD,\n    AV_CODEC_ID_FRWU,\n    AV_CODEC_ID_FLASHSV2,\n    AV_CODEC_ID_CDGRAPHICS,\n    AV_CODEC_ID_R210,\n    AV_CODEC_ID_ANM,\n    AV_CODEC_ID_BINKVIDEO,\n    AV_CODEC_ID_IFF_ILBM,\n#define AV_CODEC_ID_IFF_BYTERUN1 AV_CODEC_ID_IFF_ILBM\n    AV_CODEC_ID_KGV1,\n    AV_CODEC_ID_YOP,\n    AV_CODEC_ID_VP8,\n    AV_CODEC_ID_PICTOR,\n    AV_CODEC_ID_ANSI,\n    AV_CODEC_ID_A64_MULTI,\n    AV_CODEC_ID_A64_MULTI5,\n    AV_CODEC_ID_R10K,\n    AV_CODEC_ID_MXPEG,\n    AV_CODEC_ID_LAGARITH,\n    AV_CODEC_ID_PRORES,\n    AV_CODEC_ID_JV,\n    AV_CODEC_ID_DFA,\n    AV_CODEC_ID_WMV3IMAGE,\n    
AV_CODEC_ID_VC1IMAGE,\n    AV_CODEC_ID_UTVIDEO,\n    AV_CODEC_ID_BMV_VIDEO,\n    AV_CODEC_ID_VBLE,\n    AV_CODEC_ID_DXTORY,\n    AV_CODEC_ID_V410,\n    AV_CODEC_ID_XWD,\n    AV_CODEC_ID_CDXL,\n    AV_CODEC_ID_XBM,\n    AV_CODEC_ID_ZEROCODEC,\n    AV_CODEC_ID_MSS1,\n    AV_CODEC_ID_MSA1,\n    AV_CODEC_ID_TSCC2,\n    AV_CODEC_ID_MTS2,\n    AV_CODEC_ID_CLLC,\n    AV_CODEC_ID_MSS2,\n    AV_CODEC_ID_VP9,\n    AV_CODEC_ID_AIC,\n    AV_CODEC_ID_ESCAPE130,\n    AV_CODEC_ID_G2M,\n    AV_CODEC_ID_WEBP,\n    AV_CODEC_ID_HNM4_VIDEO,\n    AV_CODEC_ID_HEVC,\n#define AV_CODEC_ID_H265 AV_CODEC_ID_HEVC\n    AV_CODEC_ID_FIC,\n    AV_CODEC_ID_ALIAS_PIX,\n    AV_CODEC_ID_BRENDER_PIX,\n    AV_CODEC_ID_PAF_VIDEO,\n    AV_CODEC_ID_EXR,\n    AV_CODEC_ID_VP7,\n    AV_CODEC_ID_SANM,\n    AV_CODEC_ID_SGIRLE,\n    AV_CODEC_ID_MVC1,\n    AV_CODEC_ID_MVC2,\n    AV_CODEC_ID_HQX,\n    AV_CODEC_ID_TDSC,\n    AV_CODEC_ID_HQ_HQA,\n    AV_CODEC_ID_HAP,\n    AV_CODEC_ID_DDS,\n    AV_CODEC_ID_DXV,\n    AV_CODEC_ID_SCREENPRESSO,\n    AV_CODEC_ID_RSCC,\n\n    AV_CODEC_ID_Y41P = 0x8000,\n    AV_CODEC_ID_AVRP,\n    AV_CODEC_ID_012V,\n    AV_CODEC_ID_AVUI,\n    AV_CODEC_ID_AYUV,\n    AV_CODEC_ID_TARGA_Y216,\n    AV_CODEC_ID_V308,\n    AV_CODEC_ID_V408,\n    AV_CODEC_ID_YUV4,\n    AV_CODEC_ID_AVRN,\n    AV_CODEC_ID_CPIA,\n    AV_CODEC_ID_XFACE,\n    AV_CODEC_ID_SNOW,\n    AV_CODEC_ID_SMVJPEG,\n    AV_CODEC_ID_APNG,\n    AV_CODEC_ID_DAALA,\n    AV_CODEC_ID_CFHD,\n    AV_CODEC_ID_TRUEMOTION2RT,\n    AV_CODEC_ID_M101,\n    AV_CODEC_ID_MAGICYUV,\n    AV_CODEC_ID_SHEERVIDEO,\n    AV_CODEC_ID_YLC,\n    AV_CODEC_ID_PSD,\n    AV_CODEC_ID_PIXLET,\n    AV_CODEC_ID_SPEEDHQ,\n    AV_CODEC_ID_FMVC,\n    AV_CODEC_ID_SCPR,\n    AV_CODEC_ID_CLEARVIDEO,\n    AV_CODEC_ID_XPM,\n    AV_CODEC_ID_AV1,\n    AV_CODEC_ID_BITPACKED,\n    AV_CODEC_ID_MSCC,\n    AV_CODEC_ID_SRGC,\n    AV_CODEC_ID_SVG,\n    AV_CODEC_ID_GDV,\n    AV_CODEC_ID_FITS,\n\n    /* various PCM \"codecs\" */\n    AV_CODEC_ID_FIRST_AUDIO = 0x10000,     ///< A 
dummy id pointing at the start of audio codecs\n    AV_CODEC_ID_PCM_S16LE = 0x10000,\n    AV_CODEC_ID_PCM_S16BE,\n    AV_CODEC_ID_PCM_U16LE,\n    AV_CODEC_ID_PCM_U16BE,\n    AV_CODEC_ID_PCM_S8,\n    AV_CODEC_ID_PCM_U8,\n    AV_CODEC_ID_PCM_MULAW,\n    AV_CODEC_ID_PCM_ALAW,\n    AV_CODEC_ID_PCM_S32LE,\n    AV_CODEC_ID_PCM_S32BE,\n    AV_CODEC_ID_PCM_U32LE,\n    AV_CODEC_ID_PCM_U32BE,\n    AV_CODEC_ID_PCM_S24LE,\n    AV_CODEC_ID_PCM_S24BE,\n    AV_CODEC_ID_PCM_U24LE,\n    AV_CODEC_ID_PCM_U24BE,\n    AV_CODEC_ID_PCM_S24DAUD,\n    AV_CODEC_ID_PCM_ZORK,\n    AV_CODEC_ID_PCM_S16LE_PLANAR,\n    AV_CODEC_ID_PCM_DVD,\n    AV_CODEC_ID_PCM_F32BE,\n    AV_CODEC_ID_PCM_F32LE,\n    AV_CODEC_ID_PCM_F64BE,\n    AV_CODEC_ID_PCM_F64LE,\n    AV_CODEC_ID_PCM_BLURAY,\n    AV_CODEC_ID_PCM_LXF,\n    AV_CODEC_ID_S302M,\n    AV_CODEC_ID_PCM_S8_PLANAR,\n    AV_CODEC_ID_PCM_S24LE_PLANAR,\n    AV_CODEC_ID_PCM_S32LE_PLANAR,\n    AV_CODEC_ID_PCM_S16BE_PLANAR,\n\n    AV_CODEC_ID_PCM_S64LE = 0x10800,\n    AV_CODEC_ID_PCM_S64BE,\n    AV_CODEC_ID_PCM_F16LE,\n    AV_CODEC_ID_PCM_F24LE,\n\n    /* various ADPCM codecs */\n    AV_CODEC_ID_ADPCM_IMA_QT = 0x11000,\n    AV_CODEC_ID_ADPCM_IMA_WAV,\n    AV_CODEC_ID_ADPCM_IMA_DK3,\n    AV_CODEC_ID_ADPCM_IMA_DK4,\n    AV_CODEC_ID_ADPCM_IMA_WS,\n    AV_CODEC_ID_ADPCM_IMA_SMJPEG,\n    AV_CODEC_ID_ADPCM_MS,\n    AV_CODEC_ID_ADPCM_4XM,\n    AV_CODEC_ID_ADPCM_XA,\n    AV_CODEC_ID_ADPCM_ADX,\n    AV_CODEC_ID_ADPCM_EA,\n    AV_CODEC_ID_ADPCM_G726,\n    AV_CODEC_ID_ADPCM_CT,\n    AV_CODEC_ID_ADPCM_SWF,\n    AV_CODEC_ID_ADPCM_YAMAHA,\n    AV_CODEC_ID_ADPCM_SBPRO_4,\n    AV_CODEC_ID_ADPCM_SBPRO_3,\n    AV_CODEC_ID_ADPCM_SBPRO_2,\n    AV_CODEC_ID_ADPCM_THP,\n    AV_CODEC_ID_ADPCM_IMA_AMV,\n    AV_CODEC_ID_ADPCM_EA_R1,\n    AV_CODEC_ID_ADPCM_EA_R3,\n    AV_CODEC_ID_ADPCM_EA_R2,\n    AV_CODEC_ID_ADPCM_IMA_EA_SEAD,\n    AV_CODEC_ID_ADPCM_IMA_EA_EACS,\n    AV_CODEC_ID_ADPCM_EA_XAS,\n    AV_CODEC_ID_ADPCM_EA_MAXIS_XA,\n    AV_CODEC_ID_ADPCM_IMA_ISS,\n    
AV_CODEC_ID_ADPCM_G722,\n    AV_CODEC_ID_ADPCM_IMA_APC,\n    AV_CODEC_ID_ADPCM_VIMA,\n\n    AV_CODEC_ID_ADPCM_AFC = 0x11800,\n    AV_CODEC_ID_ADPCM_IMA_OKI,\n    AV_CODEC_ID_ADPCM_DTK,\n    AV_CODEC_ID_ADPCM_IMA_RAD,\n    AV_CODEC_ID_ADPCM_G726LE,\n    AV_CODEC_ID_ADPCM_THP_LE,\n    AV_CODEC_ID_ADPCM_PSX,\n    AV_CODEC_ID_ADPCM_AICA,\n    AV_CODEC_ID_ADPCM_IMA_DAT4,\n    AV_CODEC_ID_ADPCM_MTAF,\n\n    /* AMR */\n    AV_CODEC_ID_AMR_NB = 0x12000,\n    AV_CODEC_ID_AMR_WB,\n\n    /* RealAudio codecs*/\n    AV_CODEC_ID_RA_144 = 0x13000,\n    AV_CODEC_ID_RA_288,\n\n    /* various DPCM codecs */\n    AV_CODEC_ID_ROQ_DPCM = 0x14000,\n    AV_CODEC_ID_INTERPLAY_DPCM,\n    AV_CODEC_ID_XAN_DPCM,\n    AV_CODEC_ID_SOL_DPCM,\n\n    AV_CODEC_ID_SDX2_DPCM = 0x14800,\n    AV_CODEC_ID_GREMLIN_DPCM,\n\n    /* audio codecs */\n    AV_CODEC_ID_MP2 = 0x15000,\n    AV_CODEC_ID_MP3, ///< preferred ID for decoding MPEG audio layer 1, 2 or 3\n    AV_CODEC_ID_AAC,\n    AV_CODEC_ID_AC3,\n    AV_CODEC_ID_DTS,\n    AV_CODEC_ID_VORBIS,\n    AV_CODEC_ID_DVAUDIO,\n    AV_CODEC_ID_WMAV1,\n    AV_CODEC_ID_WMAV2,\n    AV_CODEC_ID_MACE3,\n    AV_CODEC_ID_MACE6,\n    AV_CODEC_ID_VMDAUDIO,\n    AV_CODEC_ID_FLAC,\n    AV_CODEC_ID_MP3ADU,\n    AV_CODEC_ID_MP3ON4,\n    AV_CODEC_ID_SHORTEN,\n    AV_CODEC_ID_ALAC,\n    AV_CODEC_ID_WESTWOOD_SND1,\n    AV_CODEC_ID_GSM, ///< as in Berlin toast format\n    AV_CODEC_ID_QDM2,\n    AV_CODEC_ID_COOK,\n    AV_CODEC_ID_TRUESPEECH,\n    AV_CODEC_ID_TTA,\n    AV_CODEC_ID_SMACKAUDIO,\n    AV_CODEC_ID_QCELP,\n    AV_CODEC_ID_WAVPACK,\n    AV_CODEC_ID_DSICINAUDIO,\n    AV_CODEC_ID_IMC,\n    AV_CODEC_ID_MUSEPACK7,\n    AV_CODEC_ID_MLP,\n    AV_CODEC_ID_GSM_MS, /* as found in WAV */\n    AV_CODEC_ID_ATRAC3,\n    AV_CODEC_ID_APE,\n    AV_CODEC_ID_NELLYMOSER,\n    AV_CODEC_ID_MUSEPACK8,\n    AV_CODEC_ID_SPEEX,\n    AV_CODEC_ID_WMAVOICE,\n    AV_CODEC_ID_WMAPRO,\n    AV_CODEC_ID_WMALOSSLESS,\n    AV_CODEC_ID_ATRAC3P,\n    AV_CODEC_ID_EAC3,\n    AV_CODEC_ID_SIPR,\n    
AV_CODEC_ID_MP1,\n    AV_CODEC_ID_TWINVQ,\n    AV_CODEC_ID_TRUEHD,\n    AV_CODEC_ID_MP4ALS,\n    AV_CODEC_ID_ATRAC1,\n    AV_CODEC_ID_BINKAUDIO_RDFT,\n    AV_CODEC_ID_BINKAUDIO_DCT,\n    AV_CODEC_ID_AAC_LATM,\n    AV_CODEC_ID_QDMC,\n    AV_CODEC_ID_CELT,\n    AV_CODEC_ID_G723_1,\n    AV_CODEC_ID_G729,\n    AV_CODEC_ID_8SVX_EXP,\n    AV_CODEC_ID_8SVX_FIB,\n    AV_CODEC_ID_BMV_AUDIO,\n    AV_CODEC_ID_RALF,\n    AV_CODEC_ID_IAC,\n    AV_CODEC_ID_ILBC,\n    AV_CODEC_ID_OPUS,\n    AV_CODEC_ID_COMFORT_NOISE,\n    AV_CODEC_ID_TAK,\n    AV_CODEC_ID_METASOUND,\n    AV_CODEC_ID_PAF_AUDIO,\n    AV_CODEC_ID_ON2AVC,\n    AV_CODEC_ID_DSS_SP,\n    AV_CODEC_ID_CODEC2,\n\n    AV_CODEC_ID_FFWAVESYNTH = 0x15800,\n    AV_CODEC_ID_SONIC,\n    AV_CODEC_ID_SONIC_LS,\n    AV_CODEC_ID_EVRC,\n    AV_CODEC_ID_SMV,\n    AV_CODEC_ID_DSD_LSBF,\n    AV_CODEC_ID_DSD_MSBF,\n    AV_CODEC_ID_DSD_LSBF_PLANAR,\n    AV_CODEC_ID_DSD_MSBF_PLANAR,\n    AV_CODEC_ID_4GV,\n    AV_CODEC_ID_INTERPLAY_ACM,\n    AV_CODEC_ID_XMA1,\n    AV_CODEC_ID_XMA2,\n    AV_CODEC_ID_DST,\n    AV_CODEC_ID_ATRAC3AL,\n    AV_CODEC_ID_ATRAC3PAL,\n    AV_CODEC_ID_DOLBY_E,\n    AV_CODEC_ID_APTX,\n    AV_CODEC_ID_APTX_HD,\n    AV_CODEC_ID_SBC,\n\n    /* subtitle codecs */\n    AV_CODEC_ID_FIRST_SUBTITLE = 0x17000,          ///< A dummy ID pointing at the start of subtitle codecs.\n    AV_CODEC_ID_DVD_SUBTITLE = 0x17000,\n    AV_CODEC_ID_DVB_SUBTITLE,\n    AV_CODEC_ID_TEXT,  ///< raw UTF-8 text\n    AV_CODEC_ID_XSUB,\n    AV_CODEC_ID_SSA,\n    AV_CODEC_ID_MOV_TEXT,\n    AV_CODEC_ID_HDMV_PGS_SUBTITLE,\n    AV_CODEC_ID_DVB_TELETEXT,\n    AV_CODEC_ID_SRT,\n\n    AV_CODEC_ID_MICRODVD   = 0x17800,\n    AV_CODEC_ID_EIA_608,\n    AV_CODEC_ID_JACOSUB,\n    AV_CODEC_ID_SAMI,\n    AV_CODEC_ID_REALTEXT,\n    AV_CODEC_ID_STL,\n    AV_CODEC_ID_SUBVIEWER1,\n    AV_CODEC_ID_SUBVIEWER,\n    AV_CODEC_ID_SUBRIP,\n    AV_CODEC_ID_WEBVTT,\n    AV_CODEC_ID_MPL2,\n    AV_CODEC_ID_VPLAYER,\n    AV_CODEC_ID_PJS,\n    AV_CODEC_ID_ASS,\n    
AV_CODEC_ID_HDMV_TEXT_SUBTITLE,\n\n    /* other specific kinds of codecs (generally used for attachments) */\n    AV_CODEC_ID_FIRST_UNKNOWN = 0x18000,           ///< A dummy ID pointing at the start of various fake codecs.\n    AV_CODEC_ID_TTF = 0x18000,\n\n    AV_CODEC_ID_SCTE_35, ///< Contains timestamps estimated through the PCR of the program stream.\n    AV_CODEC_ID_BINTEXT    = 0x18800,\n    AV_CODEC_ID_XBIN,\n    AV_CODEC_ID_IDF,\n    AV_CODEC_ID_OTF,\n    AV_CODEC_ID_SMPTE_KLV,\n    AV_CODEC_ID_DVD_NAV,\n    AV_CODEC_ID_TIMED_ID3,\n    AV_CODEC_ID_BIN_DATA,\n\n\n    AV_CODEC_ID_PROBE = 0x19000, ///< codec_id is not known (like AV_CODEC_ID_NONE) but lavf should attempt to identify it\n\n    AV_CODEC_ID_MPEG2TS = 0x20000, /**< _FAKE_ codec to indicate a raw MPEG-2 TS\n                                * stream (only used by libavformat) */\n    AV_CODEC_ID_MPEG4SYSTEMS = 0x20001, /**< _FAKE_ codec to indicate a MPEG-4 Systems\n                                * stream (only used by libavformat) */\n    AV_CODEC_ID_FFMETADATA = 0x21000,   ///< Dummy codec for streams containing only metadata information.\n    AV_CODEC_ID_WRAPPED_AVFRAME = 0x21001, ///< Passthrough codec, AVFrames wrapped in AVPacket\n};\n\n/**\n * This struct describes the properties of a single codec described by an\n * AVCodecID.\n * @see avcodec_descriptor_get()\n */\ntypedef struct AVCodecDescriptor {\n    enum AVCodecID     id;\n    enum AVMediaType type;\n    /**\n     * Name of the codec described by this descriptor. It is non-empty and\n     * unique for each codec descriptor. It should contain alphanumeric\n     * characters and '_' only.\n     */\n    const char      *name;\n    /**\n     * A more descriptive name for this codec. 
May be NULL.\n     */\n    const char *long_name;\n    /**\n     * Codec properties, a combination of AV_CODEC_PROP_* flags.\n     */\n    int             props;\n    /**\n     * MIME type(s) associated with the codec.\n     * May be NULL; if not, a NULL-terminated array of MIME types.\n     * The first item is always non-NULL and is the preferred MIME type.\n     */\n    const char *const *mime_types;\n    /**\n     * If non-NULL, an array of profiles recognized for this codec.\n     * Terminated with FF_PROFILE_UNKNOWN.\n     */\n    const struct AVProfile *profiles;\n} AVCodecDescriptor;\n\n/**\n * Codec uses only intra compression.\n * Video and audio codecs only.\n */\n#define AV_CODEC_PROP_INTRA_ONLY    (1 << 0)\n/**\n * Codec supports lossy compression. Audio and video codecs only.\n * @note a codec may support both lossy and lossless\n * compression modes\n */\n#define AV_CODEC_PROP_LOSSY         (1 << 1)\n/**\n * Codec supports lossless compression. Audio and video codecs only.\n */\n#define AV_CODEC_PROP_LOSSLESS      (1 << 2)\n/**\n * Codec supports frame reordering. 
That is, the coded order (the order in which\n * the encoded packets are output by the encoders / stored / input to the\n * decoders) may be different from the presentation order of the corresponding\n * frames.\n *\n * For codecs that do not have this property set, PTS and DTS should always be\n * equal.\n */\n#define AV_CODEC_PROP_REORDER       (1 << 3)\n/**\n * Subtitle codec is bitmap based\n * Decoded AVSubtitle data can be read from the AVSubtitleRect->pict field.\n */\n#define AV_CODEC_PROP_BITMAP_SUB    (1 << 16)\n/**\n * Subtitle codec is text based.\n * Decoded AVSubtitle data can be read from the AVSubtitleRect->ass field.\n */\n#define AV_CODEC_PROP_TEXT_SUB      (1 << 17)\n\n/**\n * @ingroup lavc_decoding\n * Required number of additionally allocated bytes at the end of the input bitstream for decoding.\n * This is mainly needed because some optimized bitstream readers read\n * 32 or 64 bit at once and could read over the end.<br>\n * Note: If the first 23 bits of the additional bytes are not 0, then damaged\n * MPEG bitstreams could cause overread and segfault.\n */\n#define AV_INPUT_BUFFER_PADDING_SIZE 64\n\n/**\n * @ingroup lavc_encoding\n * minimum encoding buffer size\n * Used to avoid some checks during header writing.\n */\n#define AV_INPUT_BUFFER_MIN_SIZE 16384\n\n/**\n * @ingroup lavc_decoding\n */\nenum AVDiscard{\n    /* We leave some space between them for extensions (drop some\n     * keyframes for intra-only or drop just some bidir frames). 
*/\n    AVDISCARD_NONE    =-16, ///< discard nothing\n    AVDISCARD_DEFAULT =  0, ///< discard useless packets like 0 size packets in avi\n    AVDISCARD_NONREF  =  8, ///< discard all non reference\n    AVDISCARD_BIDIR   = 16, ///< discard all bidirectional frames\n    AVDISCARD_NONINTRA= 24, ///< discard all non intra frames\n    AVDISCARD_NONKEY  = 32, ///< discard all frames except keyframes\n    AVDISCARD_ALL     = 48, ///< discard all\n};\n\nenum AVAudioServiceType {\n    AV_AUDIO_SERVICE_TYPE_MAIN              = 0,\n    AV_AUDIO_SERVICE_TYPE_EFFECTS           = 1,\n    AV_AUDIO_SERVICE_TYPE_VISUALLY_IMPAIRED = 2,\n    AV_AUDIO_SERVICE_TYPE_HEARING_IMPAIRED  = 3,\n    AV_AUDIO_SERVICE_TYPE_DIALOGUE          = 4,\n    AV_AUDIO_SERVICE_TYPE_COMMENTARY        = 5,\n    AV_AUDIO_SERVICE_TYPE_EMERGENCY         = 6,\n    AV_AUDIO_SERVICE_TYPE_VOICE_OVER        = 7,\n    AV_AUDIO_SERVICE_TYPE_KARAOKE           = 8,\n    AV_AUDIO_SERVICE_TYPE_NB                   , ///< Not part of ABI\n};\n\n/**\n * @ingroup lavc_encoding\n */\ntypedef struct RcOverride{\n    int start_frame;\n    int end_frame;\n    int qscale; // If this is 0 then quality_factor will be used instead.\n    float quality_factor;\n} RcOverride;\n\n/* encoding support\n   These flags can be passed in AVCodecContext.flags before initialization.\n   Note: Not everything is supported yet.\n*/\n\n/**\n * Allow decoders to produce frames with data planes that are not aligned\n * to CPU requirements (e.g. 
due to cropping).\n */\n#define AV_CODEC_FLAG_UNALIGNED       (1 <<  0)\n/**\n * Use fixed qscale.\n */\n#define AV_CODEC_FLAG_QSCALE          (1 <<  1)\n/**\n * 4 MV per MB allowed / advanced prediction for H.263.\n */\n#define AV_CODEC_FLAG_4MV             (1 <<  2)\n/**\n * Output even those frames that might be corrupted.\n */\n#define AV_CODEC_FLAG_OUTPUT_CORRUPT  (1 <<  3)\n/**\n * Use qpel MC.\n */\n#define AV_CODEC_FLAG_QPEL            (1 <<  4)\n/**\n * Use internal 2pass ratecontrol in first pass mode.\n */\n#define AV_CODEC_FLAG_PASS1           (1 <<  9)\n/**\n * Use internal 2pass ratecontrol in second pass mode.\n */\n#define AV_CODEC_FLAG_PASS2           (1 << 10)\n/**\n * loop filter.\n */\n#define AV_CODEC_FLAG_LOOP_FILTER     (1 << 11)\n/**\n * Only decode/encode grayscale.\n */\n#define AV_CODEC_FLAG_GRAY            (1 << 13)\n/**\n * error[?] variables will be set during encoding.\n */\n#define AV_CODEC_FLAG_PSNR            (1 << 15)\n/**\n * Input bitstream might be truncated at a random location\n * instead of only at frame boundaries.\n */\n#define AV_CODEC_FLAG_TRUNCATED       (1 << 16)\n/**\n * Use interlaced DCT.\n */\n#define AV_CODEC_FLAG_INTERLACED_DCT  (1 << 18)\n/**\n * Force low delay.\n */\n#define AV_CODEC_FLAG_LOW_DELAY       (1 << 19)\n/**\n * Place global headers in extradata instead of every keyframe.\n */\n#define AV_CODEC_FLAG_GLOBAL_HEADER   (1 << 22)\n/**\n * Use only bitexact stuff (except (I)DCT).\n */\n#define AV_CODEC_FLAG_BITEXACT        (1 << 23)\n/* Fx : Flag for H.263+ extra options */\n/**\n * H.263 advanced intra coding / MPEG-4 AC prediction\n */\n#define AV_CODEC_FLAG_AC_PRED         (1 << 24)\n/**\n * interlaced motion estimation\n */\n#define AV_CODEC_FLAG_INTERLACED_ME   (1 << 29)\n#define AV_CODEC_FLAG_CLOSED_GOP      (1U << 31)\n\n/**\n * Allow non spec compliant speedup tricks.\n */\n#define AV_CODEC_FLAG2_FAST           (1 <<  0)\n/**\n * Skip bitstream encoding.\n */\n#define AV_CODEC_FLAG2_NO_OUTPUT      
(1 <<  2)\n/**\n * Place global headers at every keyframe instead of in extradata.\n */\n#define AV_CODEC_FLAG2_LOCAL_HEADER   (1 <<  3)\n\n/**\n * timecode is in drop frame format. DEPRECATED!!!!\n */\n#define AV_CODEC_FLAG2_DROP_FRAME_TIMECODE (1 << 13)\n\n/**\n * Input bitstream might be truncated at packet boundaries\n * instead of only at frame boundaries.\n */\n#define AV_CODEC_FLAG2_CHUNKS         (1 << 15)\n/**\n * Discard cropping information from SPS.\n */\n#define AV_CODEC_FLAG2_IGNORE_CROP    (1 << 16)\n\n/**\n * Show all frames before the first keyframe\n */\n#define AV_CODEC_FLAG2_SHOW_ALL       (1 << 22)\n/**\n * Export motion vectors through frame side data\n */\n#define AV_CODEC_FLAG2_EXPORT_MVS     (1 << 28)\n/**\n * Do not skip samples and export skip information as frame side data\n */\n#define AV_CODEC_FLAG2_SKIP_MANUAL    (1 << 29)\n/**\n * Do not reset ASS ReadOrder field on flush (subtitles decoding)\n */\n#define AV_CODEC_FLAG2_RO_FLUSH_NOOP  (1 << 30)\n\n/* Unsupported options:\n *              Syntax Arithmetic coding (SAC)\n *              Reference Picture Selection\n *              Independent Segment Decoding */\n/* /Fx */\n/* codec capabilities */\n\n/**\n * Decoder can use draw_horiz_band callback.\n */\n#define AV_CODEC_CAP_DRAW_HORIZ_BAND     (1 <<  0)\n/**\n * Codec uses get_buffer() for allocating buffers and supports custom allocators.\n * If not set, it might not use get_buffer() at all or use operations that\n * assume the buffer was allocated by avcodec_default_get_buffer.\n */\n#define AV_CODEC_CAP_DR1                 (1 <<  1)\n#define AV_CODEC_CAP_TRUNCATED           (1 <<  3)\n/**\n * Encoder or decoder requires flushing with NULL input at the end in order to\n * give the complete and correct output.\n *\n * NOTE: If this flag is not set, the codec is guaranteed to never be fed\n *       with NULL data. 
The user can still send NULL data to the public encode\n *       or decode function, but libavcodec will not pass it along to the codec\n *       unless this flag is set.\n *\n * Decoders:\n * The decoder has a non-zero delay and needs to be fed with avpkt->data=NULL,\n * avpkt->size=0 at the end to get the delayed data until the decoder no longer\n * returns frames.\n *\n * Encoders:\n * The encoder needs to be fed with NULL data at the end of encoding until the\n * encoder no longer returns data.\n *\n * NOTE: For encoders implementing the AVCodec.encode2() function, setting this\n *       flag also means that the encoder must set the pts and duration for\n *       each output packet. If this flag is not set, the pts and duration will\n *       be determined by libavcodec from the input frame.\n */\n#define AV_CODEC_CAP_DELAY               (1 <<  5)\n/**\n * Codec can be fed a final frame with a smaller size.\n * This can be used to prevent truncation of the last audio samples.\n */\n#define AV_CODEC_CAP_SMALL_LAST_FRAME    (1 <<  6)\n\n/**\n * Codec can output multiple frames per AVPacket.\n * Normally demuxers return one frame at a time; demuxers which do not do this\n * are connected to a parser to split what they return into proper frames.\n * This flag is reserved for the very rare category of codecs which have a\n * bitstream that cannot be split into frames without time-consuming\n * operations like full decoding. Demuxers carrying such bitstreams thus\n * may return multiple frames in a packet. 
This has many disadvantages, like\n * prohibiting stream copy in many cases, so it should only be considered\n * as a last resort.\n */\n#define AV_CODEC_CAP_SUBFRAMES           (1 <<  8)\n/**\n * Codec is experimental and is thus avoided in favor of non-experimental\n * encoders.\n */\n#define AV_CODEC_CAP_EXPERIMENTAL        (1 <<  9)\n/**\n * Codec should fill in channel configuration and sample rate instead of the container.\n */\n#define AV_CODEC_CAP_CHANNEL_CONF        (1 << 10)\n/**\n * Codec supports frame-level multithreading.\n */\n#define AV_CODEC_CAP_FRAME_THREADS       (1 << 12)\n/**\n * Codec supports slice-based (or partition-based) multithreading.\n */\n#define AV_CODEC_CAP_SLICE_THREADS       (1 << 13)\n/**\n * Codec supports changed parameters at any point.\n */\n#define AV_CODEC_CAP_PARAM_CHANGE        (1 << 14)\n/**\n * Codec supports avctx->thread_count == 0 (auto).\n */\n#define AV_CODEC_CAP_AUTO_THREADS        (1 << 15)\n/**\n * Audio encoder supports receiving a different number of samples in each call.\n */\n#define AV_CODEC_CAP_VARIABLE_FRAME_SIZE (1 << 16)\n/**\n * Decoder is not a preferred choice for probing.\n * It could for example be an expensive-to-spin-up hardware decoder,\n * or it could simply not provide a lot of useful information about\n * the stream.\n * A decoder marked with this flag should only be used as a last-resort\n * choice for probing.\n */\n#define AV_CODEC_CAP_AVOID_PROBING       (1 << 17)\n/**\n * Codec is intra only.\n */\n#define AV_CODEC_CAP_INTRA_ONLY       0x40000000\n/**\n * Codec is lossless.\n */\n#define AV_CODEC_CAP_LOSSLESS         0x80000000\n\n/**\n * Codec is backed by a hardware implementation. Typically used to\n * identify a non-hwaccel hardware decoder. 
For information about hwaccels, use\n * avcodec_get_hw_config() instead.\n */\n#define AV_CODEC_CAP_HARDWARE            (1 << 18)\n\n/**\n * Codec is potentially backed by a hardware implementation, but not\n * necessarily. This is used instead of AV_CODEC_CAP_HARDWARE, if the\n * implementation provides some sort of internal fallback.\n */\n#define AV_CODEC_CAP_HYBRID              (1 << 19)\n\n/**\n * Pan Scan area.\n * This specifies the area which should be displayed.\n * Note there may be multiple such areas for one frame.\n */\ntypedef struct AVPanScan {\n    /**\n     * id\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    int id;\n\n    /**\n     * width and height in 1/16 pel\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    int width;\n    int height;\n\n    /**\n     * position of the top left corner in 1/16 pel for up to 3 fields/frames\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    int16_t position[3][2];\n} AVPanScan;\n\n/**\n * This structure describes the bitrate properties of an encoded bitstream. 
It\n * roughly corresponds to a subset of the VBV parameters for MPEG-2 or HRD\n * parameters for H.264/HEVC.\n */\ntypedef struct AVCPBProperties {\n    /**\n     * Maximum bitrate of the stream, in bits per second.\n     * Zero if unknown or unspecified.\n     */\n    int max_bitrate;\n    /**\n     * Minimum bitrate of the stream, in bits per second.\n     * Zero if unknown or unspecified.\n     */\n    int min_bitrate;\n    /**\n     * Average bitrate of the stream, in bits per second.\n     * Zero if unknown or unspecified.\n     */\n    int avg_bitrate;\n\n    /**\n     * The size of the buffer to which the ratecontrol is applied, in bits.\n     * Zero if unknown or unspecified.\n     */\n    int buffer_size;\n\n    /**\n     * The delay between the time the packet this structure is associated with\n     * is received and the time when it should be decoded, in periods of a 27MHz\n     * clock.\n     *\n     * UINT64_MAX when unknown or unspecified.\n     */\n    uint64_t vbv_delay;\n} AVCPBProperties;\n\n/**\n * The decoder will keep a reference to the frame and may reuse it later.\n */\n#define AV_GET_BUFFER_FLAG_REF (1 << 0)\n\n/**\n * @defgroup lavc_packet AVPacket\n *\n * Types and functions for working with AVPacket.\n * @{\n */\nenum AVPacketSideDataType {\n    /**\n     * An AV_PKT_DATA_PALETTE side data packet contains exactly AVPALETTE_SIZE\n     * bytes worth of palette. This side data signals that a new palette is\n     * present.\n     */\n    AV_PKT_DATA_PALETTE,\n\n    /**\n     * The AV_PKT_DATA_NEW_EXTRADATA is used to notify the codec or the format\n     * that the extradata buffer was changed and the receiving side should\n     * act upon it appropriately. 
The new extradata is embedded in the side\n     * data buffer and should be immediately used for processing the current\n     * frame or packet.\n     */\n    AV_PKT_DATA_NEW_EXTRADATA,\n\n    /**\n     * An AV_PKT_DATA_PARAM_CHANGE side data packet is laid out as follows:\n     * @code\n     * u32le param_flags\n     * if (param_flags & AV_SIDE_DATA_PARAM_CHANGE_CHANNEL_COUNT)\n     *     s32le channel_count\n     * if (param_flags & AV_SIDE_DATA_PARAM_CHANGE_CHANNEL_LAYOUT)\n     *     u64le channel_layout\n     * if (param_flags & AV_SIDE_DATA_PARAM_CHANGE_SAMPLE_RATE)\n     *     s32le sample_rate\n     * if (param_flags & AV_SIDE_DATA_PARAM_CHANGE_DIMENSIONS)\n     *     s32le width\n     *     s32le height\n     * @endcode\n     */\n    AV_PKT_DATA_PARAM_CHANGE,\n\n    /**\n     * An AV_PKT_DATA_H263_MB_INFO side data packet contains a number of\n     * structures with info about macroblocks relevant to splitting the\n     * packet into smaller packets on macroblock edges (e.g. as for RFC 2190).\n     * That is, it does not necessarily contain info about all macroblocks,\n     * as long as the distance between macroblocks in the info is smaller\n     * than the target payload size.\n     * Each MB info structure is 12 bytes, and is laid out as follows:\n     * @code\n     * u32le bit offset from the start of the packet\n     * u8    current quantizer at the start of the macroblock\n     * u8    GOB number\n     * u16le macroblock address within the GOB\n     * u8    horizontal MV predictor\n     * u8    vertical MV predictor\n     * u8    horizontal MV predictor for block number 3\n     * u8    vertical MV predictor for block number 3\n     * @endcode\n     */\n    AV_PKT_DATA_H263_MB_INFO,\n\n    /**\n     * This side data should be associated with an audio stream and contains\n     * ReplayGain information in form of the AVReplayGain struct.\n     */\n    AV_PKT_DATA_REPLAYGAIN,\n\n    /**\n     * This side data contains a 3x3 transformation matrix 
describing an affine\n     * transformation that needs to be applied to the decoded video frames for\n     * correct presentation.\n     *\n     * See libavutil/display.h for a detailed description of the data.\n     */\n    AV_PKT_DATA_DISPLAYMATRIX,\n\n    /**\n     * This side data should be associated with a video stream and contains\n     * stereoscopic 3D information in the form of the AVStereo3D struct.\n     */\n    AV_PKT_DATA_STEREO3D,\n\n    /**\n     * This side data should be associated with an audio stream and corresponds\n     * to enum AVAudioServiceType.\n     */\n    AV_PKT_DATA_AUDIO_SERVICE_TYPE,\n\n    /**\n     * This side data contains quality-related information from the encoder.\n     * @code\n     * u32le quality factor of the compressed frame. Allowed range is between 1 (good) and FF_LAMBDA_MAX (bad).\n     * u8    picture type\n     * u8    error count\n     * u16   reserved\n     * u64le[error count] sum of squared differences between encoder input and output\n     * @endcode\n     */\n    AV_PKT_DATA_QUALITY_STATS,\n\n    /**\n     * This side data contains an integer value representing the stream index\n     * of a \"fallback\" track.  A fallback track indicates an alternate\n     * track to use when the current track cannot be decoded for some reason.\n     * e.g. 
no decoder available for codec.\n     */\n    AV_PKT_DATA_FALLBACK_TRACK,\n\n    /**\n     * This side data corresponds to the AVCPBProperties struct.\n     */\n    AV_PKT_DATA_CPB_PROPERTIES,\n\n    /**\n     * Recommends skipping the specified number of samples.\n     * @code\n     * u32le number of samples to skip from start of this packet\n     * u32le number of samples to skip from end of this packet\n     * u8    reason for start skip\n     * u8    reason for end   skip (0=padding silence, 1=convergence)\n     * @endcode\n     */\n    AV_PKT_DATA_SKIP_SAMPLES,\n\n    /**\n     * An AV_PKT_DATA_JP_DUALMONO side data packet indicates that\n     * the packet may contain \"dual mono\" audio specific to Japanese DTV\n     * and if it is true, recommends only the selected channel to be used.\n     * @code\n     * u8    selected channels (0=main/left, 1=sub/right, 2=both)\n     * @endcode\n     */\n    AV_PKT_DATA_JP_DUALMONO,\n\n    /**\n     * A list of zero terminated key/value strings. There is no end marker for\n     * the list, so it is required to rely on the side data size to stop.\n     */\n    AV_PKT_DATA_STRINGS_METADATA,\n\n    /**\n     * Subtitle event position\n     * @code\n     * u32le x1\n     * u32le y1\n     * u32le x2\n     * u32le y2\n     * @endcode\n     */\n    AV_PKT_DATA_SUBTITLE_POSITION,\n\n    /**\n     * Data found in BlockAdditional element of matroska container. There is\n     * no end marker for the data, so it is required to rely on the side data\n     * size to recognize the end. 
8 byte id (as found in BlockAddId) followed\n     * by data.\n     */\n    AV_PKT_DATA_MATROSKA_BLOCKADDITIONAL,\n\n    /**\n     * The optional first identifier line of a WebVTT cue.\n     */\n    AV_PKT_DATA_WEBVTT_IDENTIFIER,\n\n    /**\n     * The optional settings (rendering instructions) that immediately\n     * follow the timestamp specifier of a WebVTT cue.\n     */\n    AV_PKT_DATA_WEBVTT_SETTINGS,\n\n    /**\n     * A list of zero terminated key/value strings. There is no end marker for\n     * the list, so it is required to rely on the side data size to stop. This\n     * side data includes updated metadata which appeared in the stream.\n     */\n    AV_PKT_DATA_METADATA_UPDATE,\n\n    /**\n     * MPEGTS stream ID, this is required to pass the stream ID\n     * information from the demuxer to the corresponding muxer.\n     */\n    AV_PKT_DATA_MPEGTS_STREAM_ID,\n\n    /**\n     * Mastering display metadata (based on SMPTE-2086:2014). This metadata\n     * should be associated with a video stream and contains data in the form\n     * of the AVMasteringDisplayMetadata struct.\n     */\n    AV_PKT_DATA_MASTERING_DISPLAY_METADATA,\n\n    /**\n     * This side data should be associated with a video stream and corresponds\n     * to the AVSphericalMapping structure.\n     */\n    AV_PKT_DATA_SPHERICAL,\n\n    /**\n     * Content light level (based on CTA-861.3). This metadata should be\n     * associated with a video stream and contains data in the form of the\n     * AVContentLightMetadata struct.\n     */\n    AV_PKT_DATA_CONTENT_LIGHT_LEVEL,\n\n    /**\n     * ATSC A53 Part 4 Closed Captions. This metadata should be associated with\n     * a video stream. 
A53 CC bitstream is stored as uint8_t in AVPacketSideData.data.\n     * The number of bytes of CC data is AVPacketSideData.size.\n     */\n    AV_PKT_DATA_A53_CC,\n\n    /**\n     * This side data is encryption initialization data.\n     * The format is not part of ABI, use av_encryption_init_info_* methods to\n     * access.\n     */\n    AV_PKT_DATA_ENCRYPTION_INIT_INFO,\n\n    /**\n     * This side data contains encryption info for how to decrypt the packet.\n     * The format is not part of ABI, use av_encryption_info_* methods to access.\n     */\n    AV_PKT_DATA_ENCRYPTION_INFO,\n\n    /**\n     * The number of side data types.\n     * This is not part of the public API/ABI in the sense that it may\n     * change when new side data types are added.\n     * This must stay the last enum value.\n     * If its value becomes huge, some code using it\n     * needs to be updated as it assumes it to be smaller than other limits.\n     */\n    AV_PKT_DATA_NB\n};\n\n#define AV_PKT_DATA_QUALITY_FACTOR AV_PKT_DATA_QUALITY_STATS //DEPRECATED\n\ntypedef struct AVPacketSideData {\n    uint8_t *data;\n    int      size;\n    enum AVPacketSideDataType type;\n} AVPacketSideData;\n\n/**\n * This structure stores compressed data. It is typically exported by demuxers\n * and then passed as input to decoders, or received as output from encoders and\n * then passed to muxers.\n *\n * For video, it should typically contain one compressed frame. For audio it may\n * contain several compressed frames. Encoders are allowed to output empty\n * packets, with no compressed data, containing only side data\n * (e.g. to update some stream parameters at the end of encoding).\n *\n * AVPacket is one of the few structs in FFmpeg, whose size is a part of public\n * ABI. 
Thus it may be allocated on the stack and no new fields can be added to it\n * without a libavcodec and libavformat major bump.\n *\n * The semantics of data ownership depend on the buf field.\n * If it is set, the packet data is dynamically allocated and is\n * valid indefinitely until a call to av_packet_unref() reduces the\n * reference count to 0.\n *\n * If the buf field is not set, av_packet_ref() makes a copy instead\n * of increasing the reference count.\n *\n * The side data is always allocated with av_malloc(), copied by\n * av_packet_ref() and freed by av_packet_unref().\n *\n * @see av_packet_ref\n * @see av_packet_unref\n */\ntypedef struct AVPacket {\n    /**\n     * A reference to the reference-counted buffer where the packet data is\n     * stored.\n     * May be NULL, then the packet data is not reference-counted.\n     */\n    AVBufferRef *buf;\n    /**\n     * Presentation timestamp in AVStream->time_base units; the time at which\n     * the decompressed packet will be presented to the user.\n     * Can be AV_NOPTS_VALUE if it is not stored in the file.\n     * pts MUST be larger than or equal to dts as presentation cannot happen before\n     * decompression, unless one wants to view hex dumps. Some formats misuse\n     * the terms dts and pts/cts to mean something different. 
Such timestamps\n     * must be converted to true pts/dts before they are stored in AVPacket.\n     */\n    int64_t pts;\n    /**\n     * Decompression timestamp in AVStream->time_base units; the time at which\n     * the packet is decompressed.\n     * Can be AV_NOPTS_VALUE if it is not stored in the file.\n     */\n    int64_t dts;\n    uint8_t *data;\n    int   size;\n    int   stream_index;\n    /**\n     * A combination of AV_PKT_FLAG values\n     */\n    int   flags;\n    /**\n     * Additional packet data that can be provided by the container.\n     * Packet can contain several types of side information.\n     */\n    AVPacketSideData *side_data;\n    int side_data_elems;\n\n    /**\n     * Duration of this packet in AVStream->time_base units, 0 if unknown.\n     * Equals next_pts - this_pts in presentation order.\n     */\n    int64_t duration;\n\n    int64_t pos;                            ///< byte position in stream, -1 if unknown\n\n#if FF_API_CONVERGENCE_DURATION\n    /**\n     * @deprecated Same as the duration field, but as int64_t. This was required\n     * for Matroska subtitles, whose duration values could overflow when the\n     * duration field was still an int.\n     */\n    attribute_deprecated\n    int64_t convergence_duration;\n#endif\n} AVPacket;\n#define AV_PKT_FLAG_KEY     0x0001 ///< The packet contains a keyframe\n#define AV_PKT_FLAG_CORRUPT 0x0002 ///< The packet content is corrupted\n/**\n * Flag is used to discard packets which are required to maintain valid\n * decoder state but are not required for output and should be dropped\n * after decoding.\n **/\n#define AV_PKT_FLAG_DISCARD   0x0004\n/**\n * The packet comes from a trusted source.\n *\n * Otherwise-unsafe constructs such as arbitrary pointers to data\n * outside the packet may be followed.\n */\n#define AV_PKT_FLAG_TRUSTED   0x0008\n/**\n * Flag is used to indicate packets that contain frames that can\n * be discarded by the decoder.  I.e. 
Non-reference frames.\n */\n#define AV_PKT_FLAG_DISPOSABLE 0x0010\n\n\nenum AVSideDataParamChangeFlags {\n    AV_SIDE_DATA_PARAM_CHANGE_CHANNEL_COUNT  = 0x0001,\n    AV_SIDE_DATA_PARAM_CHANGE_CHANNEL_LAYOUT = 0x0002,\n    AV_SIDE_DATA_PARAM_CHANGE_SAMPLE_RATE    = 0x0004,\n    AV_SIDE_DATA_PARAM_CHANGE_DIMENSIONS     = 0x0008,\n};\n/**\n * @}\n */\n\nstruct AVCodecInternal;\n\nenum AVFieldOrder {\n    AV_FIELD_UNKNOWN,\n    AV_FIELD_PROGRESSIVE,\n    AV_FIELD_TT,          ///< Top coded first, top displayed first\n    AV_FIELD_BB,          ///< Bottom coded first, bottom displayed first\n    AV_FIELD_TB,          ///< Top coded first, bottom displayed first\n    AV_FIELD_BT,          ///< Bottom coded first, top displayed first\n};\n\n/**\n * main external API structure.\n * New fields can be added to the end with minor version bumps.\n * Removal, reordering and changes to existing fields require a major\n * version bump.\n * You can use AVOptions (av_opt* / av_set/get*()) to access these fields from user\n * applications.\n * The name string for AVOptions options matches the associated command line\n * parameter name and can be found in libavcodec/options_table.h\n * The AVOption/command line parameter names differ in some cases from the C\n * structure field names for historic reasons or brevity.\n * sizeof(AVCodecContext) must not be used outside libav*.\n */\ntypedef struct AVCodecContext {\n    /**\n     * information on struct for av_log\n     * - set by avcodec_alloc_context3\n     */\n    const AVClass *av_class;\n    int log_level_offset;\n\n    enum AVMediaType codec_type; /* see AVMEDIA_TYPE_xxx */\n    const struct AVCodec  *codec;\n    enum AVCodecID     codec_id; /* see AV_CODEC_ID_xxx */\n\n    /**\n     * fourcc (LSB first, so \"ABCD\" -> ('D'<<24) + ('C'<<16) + ('B'<<8) + 'A').\n     * This is used to work around some encoder bugs.\n     * A demuxer should set this to what is stored in the field used to identify the codec.\n     * If there are multiple 
such fields in a container then the demuxer should choose the one\n     * which maximizes the information about the used codec.\n     * If the codec tag field in a container is larger than 32 bits then the demuxer should\n     * remap the longer ID to 32 bits with a table or other structure. Alternatively a new\n     * extra_codec_tag + size could be added but for this a clear advantage must be demonstrated\n     * first.\n     * - encoding: Set by user, if not then the default based on codec_id will be used.\n     * - decoding: Set by user, will be converted to uppercase by libavcodec during init.\n     */\n    unsigned int codec_tag;\n\n    void *priv_data;\n\n    /**\n     * Private context used for internal data.\n     *\n     * Unlike priv_data, this is not codec-specific. It is used in general\n     * libavcodec functions.\n     */\n    struct AVCodecInternal *internal;\n\n    /**\n     * Private data of the user, can be used to carry app specific stuff.\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    void *opaque;\n\n    /**\n     * the average bitrate\n     * - encoding: Set by user; unused for constant quantizer encoding.\n     * - decoding: Set by user, may be overwritten by libavcodec\n     *             if this info is available in the stream\n     */\n    int64_t bit_rate;\n\n    /**\n     * number of bits the bitstream is allowed to diverge from the reference.\n     *           the reference can be CBR (for CBR pass1) or VBR (for pass2)\n     * - encoding: Set by user; unused for constant quantizer encoding.\n     * - decoding: unused\n     */\n    int bit_rate_tolerance;\n\n    /**\n     * Global quality for codecs which cannot change it per frame.\n     * This should be proportional to MPEG-1/2/4 qscale.\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int global_quality;\n\n    /**\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int compression_level;\n#define 
FF_COMPRESSION_DEFAULT -1\n\n    /**\n     * AV_CODEC_FLAG_*.\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int flags;\n\n    /**\n     * AV_CODEC_FLAG2_*\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int flags2;\n\n    /**\n     * Some codecs need / can use extradata like Huffman tables.\n     * MJPEG: Huffman tables\n     * rv10: additional flags\n     * MPEG-4: global headers (they can be in the bitstream or here)\n     * The allocated memory should be AV_INPUT_BUFFER_PADDING_SIZE bytes larger\n     * than extradata_size to avoid problems if it is read with the bitstream reader.\n     * The bytewise contents of extradata must not depend on the architecture or CPU endianness.\n     * - encoding: Set/allocated/freed by libavcodec.\n     * - decoding: Set/allocated/freed by user.\n     */\n    uint8_t *extradata;\n    int extradata_size;\n\n    /**\n     * This is the fundamental unit of time (in seconds) in terms\n     * of which frame timestamps are represented. For fixed-fps content,\n     * timebase should be 1/framerate and timestamp increments should be\n     * identically 1.\n     * This is often, but not always, the inverse of the frame rate or field rate\n     * for video. 
1/time_base is not the average frame rate if the frame rate is not\n     * constant.\n     *\n     * Like containers, elementary streams can also store timestamps; 1/time_base\n     * is the unit in which these timestamps are specified.\n     * As an example of such a codec time base, see ISO/IEC 14496-2:2001(E)\n     * vop_time_increment_resolution and fixed_vop_rate\n     * (fixed_vop_rate == 0 implies that it is different from the framerate)\n     *\n     * - encoding: MUST be set by user.\n     * - decoding: the use of this field for decoding is deprecated.\n     *             Use framerate instead.\n     */\n    AVRational time_base;\n\n    /**\n     * For some codecs, the time base is closer to the field rate than the frame rate.\n     * Most notably, H.264 and MPEG-2 specify time_base as half of frame duration\n     * if no telecine is used ...\n     *\n     * Set to time_base ticks per frame. Default 1, e.g., H.264/MPEG-2 set it to 2.\n     */\n    int ticks_per_frame;\n\n    /**\n     * Codec delay.\n     *\n     * Encoding: Number of frames of delay there will be from the encoder input to\n     *           the decoder output. (we assume the decoder matches the spec)\n     * Decoding: Number of frames of delay in addition to what a standard decoder\n     *           as specified in the spec would produce.\n     *\n     * Video:\n     *   Number of frames the decoded output will be delayed relative to the\n     *   encoded input.\n     *\n     * Audio:\n     *   For encoding, this field is unused (see initial_padding).\n     *\n     *   For decoding, this is the number of samples the decoder needs to\n     *   output before the decoder's output is valid. 
When seeking, you should\n     *   start decoding this many samples prior to your desired seek point.\n     *\n     * - encoding: Set by libavcodec.\n     * - decoding: Set by libavcodec.\n     */\n    int delay;\n\n\n    /* video only */\n    /**\n     * picture width / height.\n     *\n     * @note Those fields may not match the values of the last\n     * AVFrame output by avcodec_decode_video2 due to frame\n     * reordering.\n     *\n     * - encoding: MUST be set by user.\n     * - decoding: May be set by the user before opening the decoder if known e.g.\n     *             from the container. Some decoders will require the dimensions\n     *             to be set by the caller. During decoding, the decoder may\n     *             overwrite those values as required while parsing the data.\n     */\n    int width, height;\n\n    /**\n     * Bitstream width / height, may be different from width/height e.g. when\n     * the decoded frame is cropped before being output or lowres is enabled.\n     *\n     * @note Those fields may not match the values of the last\n     * AVFrame output by avcodec_receive_frame() due to frame\n     * reordering.\n     *\n     * - encoding: unused\n     * - decoding: May be set by the user before opening the decoder if known\n     *             e.g. from the container. 
During decoding, the decoder may\n     *             overwrite those values as required while parsing the data.\n     */\n    int coded_width, coded_height;\n\n    /**\n     * the number of pictures in a group of pictures, or 0 for intra_only\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int gop_size;\n\n    /**\n     * Pixel format, see AV_PIX_FMT_xxx.\n     * May be set by the demuxer if known from headers.\n     * May be overridden by the decoder if it knows better.\n     *\n     * @note This field may not match the value of the last\n     * AVFrame output by avcodec_receive_frame() due to frame\n     * reordering.\n     *\n     * - encoding: Set by user.\n     * - decoding: Set by user if known, overridden by libavcodec while\n     *             parsing the data.\n     */\n    enum AVPixelFormat pix_fmt;\n\n    /**\n     * If non-NULL, 'draw_horiz_band' is called by the libavcodec\n     * decoder to draw a horizontal band. It improves cache usage. Not\n     * all codecs can do that. You must check the codec capabilities\n     * beforehand.\n     * When multithreading is used, it may be called from multiple threads\n     * at the same time; threads might draw different parts of the same AVFrame,\n     * or multiple AVFrames, and there is no guarantee that slices will be drawn\n     * in order.\n     * The function is also used by hardware acceleration APIs.\n     * It is called at least once during frame decoding to pass\n     * the data needed for hardware rendering.\n     * In that mode, instead of pixel data, AVFrame points to\n     * a structure specific to the acceleration API. 
The application\n     * reads the structure and can change some fields to indicate progress\n     * or mark state.\n     * - encoding: unused\n     * - decoding: Set by user.\n     * @param height the height of the slice\n     * @param y the y position of the slice\n     * @param type 1->top field, 2->bottom field, 3->frame\n     * @param offset offset into the AVFrame.data from which the slice should be read\n     */\n    void (*draw_horiz_band)(struct AVCodecContext *s,\n                            const AVFrame *src, int offset[AV_NUM_DATA_POINTERS],\n                            int y, int type, int height);\n\n    /**\n     * callback to negotiate the pixelFormat\n     * @param fmt is the list of formats which are supported by the codec,\n     * it is terminated by -1 as 0 is a valid format, the formats are ordered by quality.\n     * The first is always the native one.\n     * @note The callback may be called again immediately if initialization for\n     * the selected (hardware-accelerated) pixel format failed.\n     * @warning Behavior is undefined if the callback returns a value not\n     * in the fmt list of formats.\n     * @return the chosen format\n     * - encoding: unused\n     * - decoding: Set by user, if not set the native format will be chosen.\n     */\n    enum AVPixelFormat (*get_format)(struct AVCodecContext *s, const enum AVPixelFormat * fmt);\n\n    /**\n     * maximum number of B-frames between non-B-frames\n     * Note: The output will be delayed by max_b_frames+1 relative to the input.\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int max_b_frames;\n\n    /**\n     * qscale factor between IP and B-frames\n     * If > 0 then the last P-frame quantizer will be used (q= lastp_q*factor+offset).\n     * If < 0 then normal ratecontrol will be done (q= -normal_q*factor+offset).\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float b_quant_factor;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated 
use encoder private options instead */\n    attribute_deprecated\n    int b_frame_strategy;\n#endif\n\n    /**\n     * qscale offset between IP and B-frames\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float b_quant_offset;\n\n    /**\n     * Size of the frame reordering buffer in the decoder.\n     * For MPEG-2 it is 1 IPB or 0 low delay IP.\n     * - encoding: Set by libavcodec.\n     * - decoding: Set by libavcodec.\n     */\n    int has_b_frames;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int mpeg_quant;\n#endif\n\n    /**\n     * qscale factor between P- and I-frames\n     * If > 0 then the last P-frame quantizer will be used (q = lastp_q * factor + offset).\n     * If < 0 then normal ratecontrol will be done (q= -normal_q*factor+offset).\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float i_quant_factor;\n\n    /**\n     * qscale offset between P and I-frames\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float i_quant_offset;\n\n    /**\n     * luminance masking (0-> disabled)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float lumi_masking;\n\n    /**\n     * temporary complexity masking (0-> disabled)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float temporal_cplx_masking;\n\n    /**\n     * spatial complexity masking (0-> disabled)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float spatial_cplx_masking;\n\n    /**\n     * p block masking (0-> disabled)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float p_masking;\n\n    /**\n     * darkness masking (0-> disabled)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    float dark_masking;\n\n    /**\n     * slice count\n     * - encoding: Set by libavcodec.\n     * - decoding: Set by user (or 0).\n     */\n    int 
slice_count;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n     int prediction_method;\n#define FF_PRED_LEFT   0\n#define FF_PRED_PLANE  1\n#define FF_PRED_MEDIAN 2\n#endif\n\n    /**\n     * slice offsets in the frame in bytes\n     * - encoding: Set/allocated by libavcodec.\n     * - decoding: Set/allocated by user (or NULL).\n     */\n    int *slice_offset;\n\n    /**\n     * sample aspect ratio (0 if unknown)\n     * That is the width of a pixel divided by the height of the pixel.\n     * Numerator and denominator must be relatively prime and smaller than 256 for some video standards.\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    AVRational sample_aspect_ratio;\n\n    /**\n     * motion estimation comparison function\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int me_cmp;\n    /**\n     * subpixel motion estimation comparison function\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int me_sub_cmp;\n    /**\n     * macroblock comparison function (not supported yet)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int mb_cmp;\n    /**\n     * interlaced DCT comparison function\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int ildct_cmp;\n#define FF_CMP_SAD          0\n#define FF_CMP_SSE          1\n#define FF_CMP_SATD         2\n#define FF_CMP_DCT          3\n#define FF_CMP_PSNR         4\n#define FF_CMP_BIT          5\n#define FF_CMP_RD           6\n#define FF_CMP_ZERO         7\n#define FF_CMP_VSAD         8\n#define FF_CMP_VSSE         9\n#define FF_CMP_NSSE         10\n#define FF_CMP_W53          11\n#define FF_CMP_W97          12\n#define FF_CMP_DCTMAX       13\n#define FF_CMP_DCT264       14\n#define FF_CMP_MEDIAN_SAD   15\n#define FF_CMP_CHROMA       256\n\n    /**\n     * ME diamond size & shape\n     * - encoding: Set by user.\n     * - 
decoding: unused\n     */\n    int dia_size;\n\n    /**\n     * amount of previous MV predictors (2a+1 x 2a+1 square)\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int last_predictor_count;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int pre_me;\n#endif\n\n    /**\n     * motion estimation prepass comparison function\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int me_pre_cmp;\n\n    /**\n     * ME prepass diamond size & shape\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int pre_dia_size;\n\n    /**\n     * subpel ME quality\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int me_subpel_quality;\n\n    /**\n     * maximum motion estimation search range in subpel units\n     * If 0 then no limit.\n     *\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int me_range;\n\n    /**\n     * slice flags\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    int slice_flags;\n#define SLICE_FLAG_CODED_ORDER    0x0001 ///< draw_horiz_band() is called in coded order instead of display\n#define SLICE_FLAG_ALLOW_FIELD    0x0002 ///< allow draw_horiz_band() with field slices (MPEG-2 field pics)\n#define SLICE_FLAG_ALLOW_PLANE    0x0004 ///< allow draw_horiz_band() with 1 component at a time (SVQ1)\n\n    /**\n     * macroblock decision mode\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int mb_decision;\n#define FF_MB_DECISION_SIMPLE 0        ///< uses mb_cmp\n#define FF_MB_DECISION_BITS   1        ///< chooses the one which needs the fewest bits\n#define FF_MB_DECISION_RD     2        ///< rate distortion\n\n    /**\n     * custom intra quantization matrix\n     * - encoding: Set by user, can be NULL.\n     * - decoding: Set by libavcodec.\n     */\n    uint16_t *intra_matrix;\n\n    /**\n     * custom inter quantization 
matrix\n     * - encoding: Set by user, can be NULL.\n     * - decoding: Set by libavcodec.\n     */\n    uint16_t *inter_matrix;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int scenechange_threshold;\n\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int noise_reduction;\n#endif\n\n    /**\n     * precision of the intra DC coefficient - 8\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec\n     */\n    int intra_dc_precision;\n\n    /**\n     * Number of macroblock rows at the top which are skipped.\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    int skip_top;\n\n    /**\n     * Number of macroblock rows at the bottom which are skipped.\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    int skip_bottom;\n\n    /**\n     * minimum MB Lagrange multiplier\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int mb_lmin;\n\n    /**\n     * maximum MB Lagrange multiplier\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int mb_lmax;\n\n#if FF_API_PRIVATE_OPT\n    /**\n     * @deprecated use encoder private options instead\n     */\n    attribute_deprecated\n    int me_penalty_compensation;\n#endif\n\n    /**\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int bidir_refine;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int brd_scale;\n#endif\n\n    /**\n     * minimum GOP size\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int keyint_min;\n\n    /**\n     * number of reference frames\n     * - encoding: Set by user.\n     * - decoding: Set by lavc.\n     */\n    int refs;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int chromaoffset;\n#endif\n\n    
/**\n     * Note: Value depends upon the compare function used for fullpel ME.\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int mv0_threshold;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int b_sensitivity;\n#endif\n\n    /**\n     * Chromaticity coordinates of the source primaries.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVColorPrimaries color_primaries;\n\n    /**\n     * Color Transfer Characteristic.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVColorTransferCharacteristic color_trc;\n\n    /**\n     * YUV colorspace type.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVColorSpace colorspace;\n\n    /**\n     * MPEG vs JPEG YUV range.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVColorRange color_range;\n\n    /**\n     * This defines the location of chroma samples.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVChromaLocation chroma_sample_location;\n\n    /**\n     * Number of slices.\n     * Indicates number of picture subdivisions. Used for parallelized\n     * decoding.\n     * - encoding: Set by user\n     * - decoding: unused\n     */\n    int slices;\n\n    /** Field order\n     * - encoding: set by libavcodec\n     * - decoding: Set by user.\n     */\n    enum AVFieldOrder field_order;\n\n    /* audio only */\n    int sample_rate; ///< samples per second\n    int channels;    ///< number of audio channels\n\n    /**\n     * audio sample format\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    enum AVSampleFormat sample_fmt;  ///< sample format\n\n    /* The following data should not be initialized. 
*/\n    /**\n     * Number of samples per channel in an audio frame.\n     *\n     * - encoding: set by libavcodec in avcodec_open2(). Each submitted frame\n     *   except the last must contain exactly frame_size samples per channel.\n     *   May be 0 when the codec has AV_CODEC_CAP_VARIABLE_FRAME_SIZE set, then the\n     *   frame size is not restricted.\n     * - decoding: may be set by some decoders to indicate constant frame size\n     */\n    int frame_size;\n\n    /**\n     * Frame counter, set by libavcodec.\n     *\n     * - decoding: total number of frames returned from the decoder so far.\n     * - encoding: total number of frames passed to the encoder so far.\n     *\n     *   @note the counter is not incremented if encoding/decoding resulted in\n     *   an error.\n     */\n    int frame_number;\n\n    /**\n     * number of bytes per packet if constant and known or 0\n     * Used by some WAV based audio codecs.\n     */\n    int block_align;\n\n    /**\n     * Audio cutoff bandwidth (0 means \"automatic\")\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int cutoff;\n\n    /**\n     * Audio channel layout.\n     * - encoding: set by user.\n     * - decoding: set by user, may be overwritten by libavcodec.\n     */\n    uint64_t channel_layout;\n\n    /**\n     * Request decoder to use this channel layout if it can (0 for default)\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    uint64_t request_channel_layout;\n\n    /**\n     * Type of service that the audio stream conveys.\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    enum AVAudioServiceType audio_service_type;\n\n    /**\n     * desired sample format\n     * - encoding: Not used.\n     * - decoding: Set by user.\n     * Decoder will decode to this format if it can.\n     */\n    enum AVSampleFormat request_sample_fmt;\n\n    /**\n     * This callback is called at the beginning of each frame to get data\n     * 
buffer(s) for it. There may be one contiguous buffer for all the data or\n     * there may be a buffer per each data plane or anything in between. What\n     * this means is, you may set however many entries in buf[] you feel necessary.\n     * Each buffer must be reference-counted using the AVBuffer API (see description\n     * of buf[] below).\n     *\n     * The following fields will be set in the frame before this callback is\n     * called:\n     * - format\n     * - width, height (video only)\n     * - sample_rate, channel_layout, nb_samples (audio only)\n     * Their values may differ from the corresponding values in\n     * AVCodecContext. This callback must use the frame values, not the codec\n     * context values, to calculate the required buffer size.\n     *\n     * This callback must fill the following fields in the frame:\n     * - data[]\n     * - linesize[]\n     * - extended_data:\n     *   * if the data is planar audio with more than 8 channels, then this\n     *     callback must allocate and fill extended_data to contain all pointers\n     *     to all data planes. data[] must hold as many pointers as it can.\n     *     extended_data must be allocated with av_malloc() and will be freed in\n     *     av_frame_unref().\n     *   * otherwise extended_data must point to data\n     * - buf[] must contain one or more pointers to AVBufferRef structures. Each of\n     *   the frame's data and extended_data pointers must be contained in these. That\n     *   is, one AVBufferRef for each allocated chunk of memory, not necessarily one\n     *   AVBufferRef per data[] entry. See: av_buffer_create(), av_buffer_alloc(),\n     *   and av_buffer_ref().\n     * - extended_buf and nb_extended_buf must be allocated with av_malloc() by\n     *   this callback and filled with the extra buffers if there are more\n     *   buffers than buf[] can hold. 
extended_buf will be freed in\n     *   av_frame_unref().\n     *\n     * If AV_CODEC_CAP_DR1 is not set then get_buffer2() must call\n     * avcodec_default_get_buffer2() instead of providing buffers allocated by\n     * some other means.\n     *\n     * Each data plane must be aligned to the maximum required by the target\n     * CPU.\n     *\n     * @see avcodec_default_get_buffer2()\n     *\n     * Video:\n     *\n     * If AV_GET_BUFFER_FLAG_REF is set in flags then the frame may be reused\n     * (read and/or written to if it is writable) later by libavcodec.\n     *\n     * avcodec_align_dimensions2() should be used to find the required width and\n     * height, as they normally need to be rounded up to the next multiple of 16.\n     *\n     * Some decoders do not support linesizes changing between frames.\n     *\n     * If frame multithreading is used and thread_safe_callbacks is set,\n     * this callback may be called from a different thread, but not from more\n     * than one at once. Does not need to be reentrant.\n     *\n     * @see avcodec_align_dimensions2()\n     *\n     * Audio:\n     *\n     * Decoders request a buffer of a particular size by setting\n     * AVFrame.nb_samples prior to calling get_buffer2(). The decoder may,\n     * however, utilize only part of the buffer by setting AVFrame.nb_samples\n     * to a smaller value in the output frame.\n     *\n     * As a convenience, av_samples_get_buffer_size() and\n     * av_samples_fill_arrays() in libavutil may be used by custom get_buffer2()\n     * functions to find the required data size and to fill data pointers and\n     * linesize. 
In AVFrame.linesize, only linesize[0] may be set for audio\n     * since all planes must be the same size.\n     *\n     * @see av_samples_get_buffer_size(), av_samples_fill_arrays()\n     *\n     * - encoding: unused\n     * - decoding: Set by libavcodec, user can override.\n     */\n    int (*get_buffer2)(struct AVCodecContext *s, AVFrame *frame, int flags);\n\n    /**\n     * If non-zero, the decoded audio and video frames returned from\n     * avcodec_decode_video2() and avcodec_decode_audio4() are reference-counted\n     * and are valid indefinitely. The caller must free them with\n     * av_frame_unref() when they are not needed anymore.\n     * Otherwise, the decoded frames must not be freed by the caller and are\n     * only valid until the next decode call.\n     *\n     * This is always automatically enabled if avcodec_receive_frame() is used.\n     *\n     * - encoding: unused\n     * - decoding: set by the caller before avcodec_open2().\n     */\n    attribute_deprecated\n    int refcounted_frames;\n\n    /* - encoding parameters */\n    float qcompress;  ///< amount of qscale change between easy & hard scenes (0.0-1.0)\n    float qblur;      ///< amount of qscale smoothing over time (0.0-1.0)\n\n    /**\n     * minimum quantizer\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int qmin;\n\n    /**\n     * maximum quantizer\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int qmax;\n\n    /**\n     * maximum quantizer difference between frames\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int max_qdiff;\n\n    /**\n     * decoder bitstream buffer size\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int rc_buffer_size;\n\n    /**\n     * ratecontrol override, see RcOverride\n     * - encoding: Allocated/set/freed by user.\n     * - decoding: unused\n     */\n    int rc_override_count;\n    RcOverride *rc_override;\n\n    /**\n     * maximum bitrate\n   
  * - encoding: Set by user.\n     * - decoding: Set by user, may be overwritten by libavcodec.\n     */\n    int64_t rc_max_rate;\n\n    /**\n     * minimum bitrate\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int64_t rc_min_rate;\n\n    /**\n     * Ratecontrol attempt to use, at maximum, <value> of what can be used without an underflow.\n     * - encoding: Set by user.\n     * - decoding: unused.\n     */\n    float rc_max_available_vbv_use;\n\n    /**\n     * Ratecontrol attempt to use, at least, <value> times the amount needed to prevent a vbv overflow.\n     * - encoding: Set by user.\n     * - decoding: unused.\n     */\n    float rc_min_vbv_overflow_use;\n\n    /**\n     * Number of bits which should be loaded into the rc buffer before decoding starts.\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int rc_initial_buffer_occupancy;\n\n#if FF_API_CODER_TYPE\n#define FF_CODER_TYPE_VLC       0\n#define FF_CODER_TYPE_AC        1\n#define FF_CODER_TYPE_RAW       2\n#define FF_CODER_TYPE_RLE       3\n    /**\n     * @deprecated use encoder private options instead\n     */\n    attribute_deprecated\n    int coder_type;\n#endif /* FF_API_CODER_TYPE */\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int context_model;\n#endif\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int frame_skip_threshold;\n\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int frame_skip_factor;\n\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int frame_skip_exp;\n\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int frame_skip_cmp;\n#endif /* FF_API_PRIVATE_OPT */\n\n    /**\n     * trellis RD quantization\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n 
   int trellis;\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int min_prediction_order;\n\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int max_prediction_order;\n\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int64_t timecode_frame_start;\n#endif\n\n#if FF_API_RTP_CALLBACK\n    /**\n     * @deprecated unused\n     */\n    /* The RTP callback: This function is called    */\n    /* every time the encoder has a packet to send. */\n    /* It depends on the encoder if the data starts */\n    /* with a Start Code (it should). H.263 does.   */\n    /* mb_nb contains the number of macroblocks     */\n    /* encoded in the RTP payload.                  */\n    attribute_deprecated\n    void (*rtp_callback)(struct AVCodecContext *avctx, void *data, int size, int mb_nb);\n#endif\n\n#if FF_API_PRIVATE_OPT\n    /** @deprecated use encoder private options instead */\n    attribute_deprecated\n    int rtp_payload_size;   /* The size of the RTP payload: the coder will  */\n                            /* do its best to deliver a chunk with size     */\n                            /* below rtp_payload_size, the chunk will start */\n                            /* with a start code on some codecs like H.263. */\n                            /* This doesn't take account of any particular  */\n                            /* headers inside the transmitted RTP payload.  
*/\n#endif\n\n#if FF_API_STAT_BITS\n    /* statistics, used for 2-pass encoding */\n    attribute_deprecated\n    int mv_bits;\n    attribute_deprecated\n    int header_bits;\n    attribute_deprecated\n    int i_tex_bits;\n    attribute_deprecated\n    int p_tex_bits;\n    attribute_deprecated\n    int i_count;\n    attribute_deprecated\n    int p_count;\n    attribute_deprecated\n    int skip_count;\n    attribute_deprecated\n    int misc_bits;\n\n    /** @deprecated this field is unused */\n    attribute_deprecated\n    int frame_bits;\n#endif\n\n    /**\n     * pass1 encoding statistics output buffer\n     * - encoding: Set by libavcodec.\n     * - decoding: unused\n     */\n    char *stats_out;\n\n    /**\n     * pass2 encoding statistics input buffer\n     * Concatenated stuff from stats_out of pass1 should be placed here.\n     * - encoding: Allocated/set/freed by user.\n     * - decoding: unused\n     */\n    char *stats_in;\n\n    /**\n     * Work around bugs in encoders which sometimes cannot be detected automatically.\n     * - encoding: Set by user\n     * - decoding: Set by user\n     */\n    int workaround_bugs;\n#define FF_BUG_AUTODETECT       1  ///< autodetection\n#define FF_BUG_XVID_ILACE       4\n#define FF_BUG_UMP4             8\n#define FF_BUG_NO_PADDING       16\n#define FF_BUG_AMV              32\n#define FF_BUG_QPEL_CHROMA      64\n#define FF_BUG_STD_QPEL         128\n#define FF_BUG_QPEL_CHROMA2     256\n#define FF_BUG_DIRECT_BLOCKSIZE 512\n#define FF_BUG_EDGE             1024\n#define FF_BUG_HPEL_CHROMA      2048\n#define FF_BUG_DC_CLIP          4096\n#define FF_BUG_MS               8192 ///< Work around various bugs in Microsoft's broken decoders.\n#define FF_BUG_TRUNCATED       16384\n#define FF_BUG_IEDGE           32768\n\n    /**\n     * strictly follow the standard (MPEG-4, ...).\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     * Setting this to STRICT or higher means the encoder and decoder will\n     * generally 
do stupid things, whereas setting it to unofficial or lower\n     * will mean the encoder might produce output that is not supported by all\n     * spec-compliant decoders. Decoders don't differentiate between normal,\n     * unofficial and experimental (that is, they always try to decode things\n     * when they can) unless they are explicitly asked to behave stupidly\n     * (=strictly conform to the specs)\n     */\n    int strict_std_compliance;\n#define FF_COMPLIANCE_VERY_STRICT   2 ///< Strictly conform to an older more strict version of the spec or reference software.\n#define FF_COMPLIANCE_STRICT        1 ///< Strictly conform to all the things in the spec no matter what consequences.\n#define FF_COMPLIANCE_NORMAL        0\n#define FF_COMPLIANCE_UNOFFICIAL   -1 ///< Allow unofficial extensions\n#define FF_COMPLIANCE_EXPERIMENTAL -2 ///< Allow nonstandardized experimental things.\n\n    /**\n     * error concealment flags\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    int error_concealment;\n#define FF_EC_GUESS_MVS   1\n#define FF_EC_DEBLOCK     2\n#define FF_EC_FAVOR_INTER 256\n\n    /**\n     * debug\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int debug;\n#define FF_DEBUG_PICT_INFO   1\n#define FF_DEBUG_RC          2\n#define FF_DEBUG_BITSTREAM   4\n#define FF_DEBUG_MB_TYPE     8\n#define FF_DEBUG_QP          16\n#if FF_API_DEBUG_MV\n/**\n * @deprecated this option does nothing\n */\n#define FF_DEBUG_MV          32\n#endif\n#define FF_DEBUG_DCT_COEFF   0x00000040\n#define FF_DEBUG_SKIP        0x00000080\n#define FF_DEBUG_STARTCODE   0x00000100\n#define FF_DEBUG_ER          0x00000400\n#define FF_DEBUG_MMCO        0x00000800\n#define FF_DEBUG_BUGS        0x00001000\n#if FF_API_DEBUG_MV\n#define FF_DEBUG_VIS_QP      0x00002000\n#define FF_DEBUG_VIS_MB_TYPE 0x00004000\n#endif\n#define FF_DEBUG_BUFFERS     0x00008000\n#define FF_DEBUG_THREADS     0x00010000\n#define FF_DEBUG_GREEN_MD    
0x00800000\n#define FF_DEBUG_NOMC        0x01000000\n\n#if FF_API_DEBUG_MV\n    /**\n     * debug\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int debug_mv;\n#define FF_DEBUG_VIS_MV_P_FOR  0x00000001 // visualize forward predicted MVs of P-frames\n#define FF_DEBUG_VIS_MV_B_FOR  0x00000002 // visualize forward predicted MVs of B-frames\n#define FF_DEBUG_VIS_MV_B_BACK 0x00000004 // visualize backward predicted MVs of B-frames\n#endif\n\n    /**\n     * Error recognition; may misdetect some more or less valid parts as errors.\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    int err_recognition;\n\n/**\n * Verify checksums embedded in the bitstream (could be of either encoded or\n * decoded data, depending on the codec) and print an error message on mismatch.\n * If AV_EF_EXPLODE is also set, a mismatching checksum will result in the\n * decoder returning an error.\n */\n#define AV_EF_CRCCHECK  (1<<0)\n#define AV_EF_BITSTREAM (1<<1)          ///< detect bitstream specification deviations\n#define AV_EF_BUFFER    (1<<2)          ///< detect improper bitstream length\n#define AV_EF_EXPLODE   (1<<3)          ///< abort decoding on minor error detection\n\n#define AV_EF_IGNORE_ERR (1<<15)        ///< ignore errors and continue\n#define AV_EF_CAREFUL    (1<<16)        ///< consider things that violate the spec, are fast to calculate and have not been seen in the wild as errors\n#define AV_EF_COMPLIANT  (1<<17)        ///< consider all spec non compliances as errors\n#define AV_EF_AGGRESSIVE (1<<18)        ///< consider things that a sane encoder should not do as an error\n\n\n    /**\n     * opaque 64-bit number (generally a PTS) that will be reordered and\n     * output in AVFrame.reordered_opaque\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    int64_t reordered_opaque;\n\n    /**\n     * Hardware accelerator in use\n     * - encoding: unused.\n     * - decoding: Set by libavcodec\n     
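* A minimal illustrative check (purely optional, not required by the API):\n     * @code\n     * if (avctx->hwaccel)\n     *     av_log(avctx, AV_LOG_INFO, \"hwaccel: %s\\n\", avctx->hwaccel->name);\n     * @endcode\n     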
*/\n    const struct AVHWAccel *hwaccel;\n\n    /**\n     * Hardware accelerator context.\n     * For some hardware accelerators, a global context needs to be\n     * provided by the user. In that case, this holds display-dependent\n     * data FFmpeg cannot instantiate itself. Please refer to the\n     * FFmpeg HW accelerator documentation to know how to fill this.\n     * E.g. for VA API, this is a struct vaapi_context.\n     * - encoding: unused\n     * - decoding: Set by user\n     */\n    void *hwaccel_context;\n\n    /**\n     * error\n     * - encoding: Set by libavcodec if flags & AV_CODEC_FLAG_PSNR.\n     * - decoding: unused\n     */\n    uint64_t error[AV_NUM_DATA_POINTERS];\n\n    /**\n     * DCT algorithm, see FF_DCT_* below\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n    int dct_algo;\n#define FF_DCT_AUTO    0\n#define FF_DCT_FASTINT 1\n#define FF_DCT_INT     2\n#define FF_DCT_MMX     3\n#define FF_DCT_ALTIVEC 5\n#define FF_DCT_FAAN    6\n\n    /**\n     * IDCT algorithm, see FF_IDCT_* below.\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int idct_algo;\n#define FF_IDCT_AUTO          0\n#define FF_IDCT_INT           1\n#define FF_IDCT_SIMPLE        2\n#define FF_IDCT_SIMPLEMMX     3\n#define FF_IDCT_ARM           7\n#define FF_IDCT_ALTIVEC       8\n#define FF_IDCT_SIMPLEARM     10\n#define FF_IDCT_XVID          14\n#define FF_IDCT_SIMPLEARMV5TE 16\n#define FF_IDCT_SIMPLEARMV6   17\n#define FF_IDCT_FAAN          20\n#define FF_IDCT_SIMPLENEON    22\n#define FF_IDCT_NONE          24 /* Used by XvMC to extract IDCT coefficients with FF_IDCT_PERM_NONE */\n#define FF_IDCT_SIMPLEAUTO    128\n\n    /**\n     * bits per sample/pixel from the demuxer (needed for huffyuv).\n     * - encoding: Set by libavcodec.\n     * - decoding: Set by user.\n     */\n     int bits_per_coded_sample;\n\n    /**\n     * Bits per sample/pixel of internal libavcodec pixel/sample format.\n     * - encoding: set by 
user.\n     * - decoding: set by libavcodec.\n     */\n    int bits_per_raw_sample;\n\n#if FF_API_LOWRES\n    /**\n     * low resolution decoding, 1-> 1/2 size, 2->1/4 size\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n     int lowres;\n#endif\n\n#if FF_API_CODED_FRAME\n    /**\n     * the picture in the bitstream\n     * - encoding: Set by libavcodec.\n     * - decoding: unused\n     *\n     * @deprecated use the quality factor packet side data instead\n     */\n    attribute_deprecated AVFrame *coded_frame;\n#endif\n\n    /**\n     * thread count\n     * is used to decide how many independent tasks should be passed to execute()\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int thread_count;\n\n    /**\n     * Which multithreading methods to use.\n     * Use of FF_THREAD_FRAME will increase decoding delay by one frame per thread,\n     * so clients which cannot provide future frames should not use it.\n     *\n     * - encoding: Set by user, otherwise the default is used.\n     * - decoding: Set by user, otherwise the default is used.\n     */\n    int thread_type;\n#define FF_THREAD_FRAME   1 ///< Decode more than one frame at once\n#define FF_THREAD_SLICE   2 ///< Decode more than one part of a single frame at once\n\n    /**\n     * Which multithreading methods are in use by the codec.\n     * - encoding: Set by libavcodec.\n     * - decoding: Set by libavcodec.\n     */\n    int active_thread_type;\n\n    /**\n     * Set by the client if its custom get_buffer() callback can be called\n     * synchronously from another thread, which allows faster multithreaded decoding.\n     * draw_horiz_band() will be called from other threads regardless of this setting.\n     * Ignored if the default get_buffer() is used.\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int thread_safe_callbacks;\n\n    /**\n     * The codec may call this to execute several independent things.\n     * It 
will return only after finishing all tasks.\n     * The user may replace this with some multithreaded implementation,\n     * the default implementation will execute the parts serially.\n     * @param count the number of things to execute\n     * - encoding: Set by libavcodec, user can override.\n     * - decoding: Set by libavcodec, user can override.\n     */\n    int (*execute)(struct AVCodecContext *c, int (*func)(struct AVCodecContext *c2, void *arg), void *arg2, int *ret, int count, int size);\n\n    /**\n     * The codec may call this to execute several independent things.\n     * It will return only after finishing all tasks.\n     * The user may replace this with some multithreaded implementation,\n     * the default implementation will execute the parts serially.\n     * Also see avcodec_thread_init and e.g. the --enable-pthread configure option.\n     * @param c context passed also to func\n     * @param count the number of things to execute\n     * @param arg2 argument passed unchanged to func\n     * @param ret return values of executed functions, must have space for \"count\" values. May be NULL.\n     * @param func function that will be called count times, with jobnr from 0 to count-1.\n     *             threadnr will be in the range 0 to c->thread_count-1 < MAX_THREADS and so that no\n     *             two instances of func executing at the same time will have the same threadnr.\n     * @return always 0 currently, but code should handle a future improvement where when any call to func\n     *         returns < 0 no further calls to func may be done and < 0 is returned.\n     * - encoding: Set by libavcodec, user can override.\n     * - decoding: Set by libavcodec, user can override.\n     */\n    int (*execute2)(struct AVCodecContext *c, int (*func)(struct AVCodecContext *c2, void *arg, int jobnr, int threadnr), void *arg2, int *ret, int count);\n\n    /**\n     * noise vs. 
sse weight for the nsse comparison function\n     * - encoding: Set by user.\n     * - decoding: unused\n     */\n     int nsse_weight;\n\n    /**\n     * profile\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n     int profile;\n#define FF_PROFILE_UNKNOWN -99\n#define FF_PROFILE_RESERVED -100\n\n#define FF_PROFILE_AAC_MAIN 0\n#define FF_PROFILE_AAC_LOW  1\n#define FF_PROFILE_AAC_SSR  2\n#define FF_PROFILE_AAC_LTP  3\n#define FF_PROFILE_AAC_HE   4\n#define FF_PROFILE_AAC_HE_V2 28\n#define FF_PROFILE_AAC_LD   22\n#define FF_PROFILE_AAC_ELD  38\n#define FF_PROFILE_MPEG2_AAC_LOW 128\n#define FF_PROFILE_MPEG2_AAC_HE  131\n\n#define FF_PROFILE_DNXHD         0\n#define FF_PROFILE_DNXHR_LB      1\n#define FF_PROFILE_DNXHR_SQ      2\n#define FF_PROFILE_DNXHR_HQ      3\n#define FF_PROFILE_DNXHR_HQX     4\n#define FF_PROFILE_DNXHR_444     5\n\n#define FF_PROFILE_DTS         20\n#define FF_PROFILE_DTS_ES      30\n#define FF_PROFILE_DTS_96_24   40\n#define FF_PROFILE_DTS_HD_HRA  50\n#define FF_PROFILE_DTS_HD_MA   60\n#define FF_PROFILE_DTS_EXPRESS 70\n\n#define FF_PROFILE_MPEG2_422    0\n#define FF_PROFILE_MPEG2_HIGH   1\n#define FF_PROFILE_MPEG2_SS     2\n#define FF_PROFILE_MPEG2_SNR_SCALABLE  3\n#define FF_PROFILE_MPEG2_MAIN   4\n#define FF_PROFILE_MPEG2_SIMPLE 5\n\n#define FF_PROFILE_H264_CONSTRAINED  (1<<9)  // 8+1; constraint_set1_flag\n#define FF_PROFILE_H264_INTRA        (1<<11) // 8+3; constraint_set3_flag\n\n#define FF_PROFILE_H264_BASELINE             66\n#define FF_PROFILE_H264_CONSTRAINED_BASELINE (66|FF_PROFILE_H264_CONSTRAINED)\n#define FF_PROFILE_H264_MAIN                 77\n#define FF_PROFILE_H264_EXTENDED             88\n#define FF_PROFILE_H264_HIGH                 100\n#define FF_PROFILE_H264_HIGH_10              110\n#define FF_PROFILE_H264_HIGH_10_INTRA        (110|FF_PROFILE_H264_INTRA)\n#define FF_PROFILE_H264_MULTIVIEW_HIGH       118\n#define FF_PROFILE_H264_HIGH_422             122\n#define 
FF_PROFILE_H264_HIGH_422_INTRA       (122|FF_PROFILE_H264_INTRA)\n#define FF_PROFILE_H264_STEREO_HIGH          128\n#define FF_PROFILE_H264_HIGH_444             144\n#define FF_PROFILE_H264_HIGH_444_PREDICTIVE  244\n#define FF_PROFILE_H264_HIGH_444_INTRA       (244|FF_PROFILE_H264_INTRA)\n#define FF_PROFILE_H264_CAVLC_444            44\n\n#define FF_PROFILE_VC1_SIMPLE   0\n#define FF_PROFILE_VC1_MAIN     1\n#define FF_PROFILE_VC1_COMPLEX  2\n#define FF_PROFILE_VC1_ADVANCED 3\n\n#define FF_PROFILE_MPEG4_SIMPLE                     0\n#define FF_PROFILE_MPEG4_SIMPLE_SCALABLE            1\n#define FF_PROFILE_MPEG4_CORE                       2\n#define FF_PROFILE_MPEG4_MAIN                       3\n#define FF_PROFILE_MPEG4_N_BIT                      4\n#define FF_PROFILE_MPEG4_SCALABLE_TEXTURE           5\n#define FF_PROFILE_MPEG4_SIMPLE_FACE_ANIMATION      6\n#define FF_PROFILE_MPEG4_BASIC_ANIMATED_TEXTURE     7\n#define FF_PROFILE_MPEG4_HYBRID                     8\n#define FF_PROFILE_MPEG4_ADVANCED_REAL_TIME         9\n#define FF_PROFILE_MPEG4_CORE_SCALABLE             10\n#define FF_PROFILE_MPEG4_ADVANCED_CODING           11\n#define FF_PROFILE_MPEG4_ADVANCED_CORE             12\n#define FF_PROFILE_MPEG4_ADVANCED_SCALABLE_TEXTURE 13\n#define FF_PROFILE_MPEG4_SIMPLE_STUDIO             14\n#define FF_PROFILE_MPEG4_ADVANCED_SIMPLE           15\n\n#define FF_PROFILE_JPEG2000_CSTREAM_RESTRICTION_0   1\n#define FF_PROFILE_JPEG2000_CSTREAM_RESTRICTION_1   2\n#define FF_PROFILE_JPEG2000_CSTREAM_NO_RESTRICTION  32768\n#define FF_PROFILE_JPEG2000_DCINEMA_2K              3\n#define FF_PROFILE_JPEG2000_DCINEMA_4K              4\n\n#define FF_PROFILE_VP9_0                            0\n#define FF_PROFILE_VP9_1                            1\n#define FF_PROFILE_VP9_2                            2\n#define FF_PROFILE_VP9_3                            3\n\n#define FF_PROFILE_HEVC_MAIN                        1\n#define FF_PROFILE_HEVC_MAIN_10                     2\n#define 
FF_PROFILE_HEVC_MAIN_STILL_PICTURE          3\n#define FF_PROFILE_HEVC_REXT                        4\n\n#define FF_PROFILE_AV1_MAIN                         0\n#define FF_PROFILE_AV1_HIGH                         1\n#define FF_PROFILE_AV1_PROFESSIONAL                 2\n\n#define FF_PROFILE_MJPEG_HUFFMAN_BASELINE_DCT            0xc0\n#define FF_PROFILE_MJPEG_HUFFMAN_EXTENDED_SEQUENTIAL_DCT 0xc1\n#define FF_PROFILE_MJPEG_HUFFMAN_PROGRESSIVE_DCT         0xc2\n#define FF_PROFILE_MJPEG_HUFFMAN_LOSSLESS                0xc3\n#define FF_PROFILE_MJPEG_JPEG_LS                         0xf7\n\n#define FF_PROFILE_SBC_MSBC                         1\n\n    /**\n     * level\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n     int level;\n#define FF_LEVEL_UNKNOWN -99\n\n    /**\n     * Skip loop filtering for selected frames.\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    enum AVDiscard skip_loop_filter;\n\n    /**\n     * Skip IDCT/dequantization for selected frames.\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    enum AVDiscard skip_idct;\n\n    /**\n     * Skip decoding for selected frames.\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n    enum AVDiscard skip_frame;\n\n    /**\n     * Header containing style information for text subtitles.\n     * For SUBTITLE_ASS subtitle type, it should contain the whole ASS\n     * [Script Info] and [V4+ Styles] section, plus the [Events] line and\n     * the Format line following. 
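An abbreviated\n     * sketch of such a header (illustrative only; field lists shortened):\n     * @code\n     * [Script Info]\n     * ScriptType: v4.00+\n     *\n     * [V4+ Styles]\n     * Format: Name, Fontname, Fontsize, PrimaryColour, ...\n     * Style: Default,Arial,16,&Hffffff,...\n     *\n     * [Events]\n     * Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text\n     * @endcode\n     * 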
It shouldn't include any Dialogue line.\n     * - encoding: Set/allocated/freed by user (before avcodec_open2())\n     * - decoding: Set/allocated/freed by libavcodec (by avcodec_open2())\n     */\n    uint8_t *subtitle_header;\n    int subtitle_header_size;\n\n#if FF_API_VBV_DELAY\n    /**\n     * VBV delay coded in the last frame (in periods of a 27 MHz clock).\n     * Used for compliant TS muxing.\n     * - encoding: Set by libavcodec.\n     * - decoding: unused.\n     * @deprecated this value is now exported as a part of\n     * AV_PKT_DATA_CPB_PROPERTIES packet side data\n     */\n    attribute_deprecated\n    uint64_t vbv_delay;\n#endif\n\n#if FF_API_SIDEDATA_ONLY_PKT\n    /**\n     * Encoding only and set by default. Allow encoders to output packets\n     * that do not contain any encoded data, only side data.\n     *\n     * Some encoders need to output such packets, e.g. to update some stream\n     * parameters at the end of encoding.\n     *\n     * @deprecated this field disables the default behaviour and\n     *             it is kept only for compatibility.\n     */\n    attribute_deprecated\n    int side_data_only_packets;\n#endif\n\n    /**\n     * Audio only. The number of \"priming\" samples (padding) inserted by the\n     * encoder at the beginning of the audio. I.e. this number of leading\n     * decoded samples must be discarded by the caller to get the original audio\n     * without leading padding.\n     *\n     * - decoding: unused\n     * - encoding: Set by libavcodec. The timestamps on the output packets are\n     *             adjusted by the encoder so that they always refer to the\n     *             first sample of the data actually contained in the packet,\n     *             including any added padding.  E.g. 
if the timebase is\n     *             1/samplerate and the timestamp of the first input sample is\n     *             0, the timestamp of the first output packet will be\n     *             -initial_padding.\n     */\n    int initial_padding;\n\n    /**\n     * - decoding: For codecs that store a framerate value in the compressed\n     *             bitstream, the decoder may export it here. { 0, 1} when\n     *             unknown.\n     * - encoding: May be used to signal the framerate of CFR content to an\n     *             encoder.\n     */\n    AVRational framerate;\n\n    /**\n     * Nominal unaccelerated pixel format, see AV_PIX_FMT_xxx.\n     * - encoding: unused.\n     * - decoding: Set by libavcodec before calling get_format()\n     */\n    enum AVPixelFormat sw_pix_fmt;\n\n    /**\n     * Timebase in which pkt_dts/pts and AVPacket.dts/pts are expressed.\n     * - encoding: unused.\n     * - decoding: set by user.\n     */\n    AVRational pkt_timebase;\n\n    /**\n     * AVCodecDescriptor\n     * - encoding: unused.\n     * - decoding: set by libavcodec.\n     */\n    const AVCodecDescriptor *codec_descriptor;\n\n#if !FF_API_LOWRES\n    /**\n     * low resolution decoding, 1-> 1/2 size, 2->1/4 size\n     * - encoding: unused\n     * - decoding: Set by user.\n     */\n     int lowres;\n#endif\n\n    /**\n     * Current statistics for PTS correction.\n     * - decoding: maintained and used by libavcodec, not intended to be used by user apps\n     * - encoding: unused\n     */\n    int64_t pts_correction_num_faulty_pts; ///< Number of incorrect PTS values so far\n    int64_t pts_correction_num_faulty_dts; ///< Number of incorrect DTS values so far\n    int64_t pts_correction_last_pts;       ///< PTS of the last frame\n    int64_t pts_correction_last_dts;       ///< DTS of the last frame\n\n    /**\n     * Character encoding of the input subtitles file.\n     * - decoding: set by user\n     * - encoding: unused\n     */\n    char *sub_charenc;\n\n    /**\n     * Subtitles 
character encoding mode. Formats or codecs might be adjusting\n     * this setting (if they are doing the conversion themselves for instance).\n     * - decoding: set by libavcodec\n     * - encoding: unused\n     */\n    int sub_charenc_mode;\n#define FF_SUB_CHARENC_MODE_DO_NOTHING  -1  ///< do nothing (demuxer outputs a stream supposed to be already in UTF-8, or the codec is bitmap for instance)\n#define FF_SUB_CHARENC_MODE_AUTOMATIC    0  ///< libavcodec will select the mode itself\n#define FF_SUB_CHARENC_MODE_PRE_DECODER  1  ///< the AVPacket data needs to be recoded to UTF-8 before being fed to the decoder, requires iconv\n#define FF_SUB_CHARENC_MODE_IGNORE       2  ///< neither convert the subtitles, nor check them for valid UTF-8\n\n    /**\n     * Skip processing alpha if supported by codec.\n     * Note that if the format uses pre-multiplied alpha (common with VP6,\n     * and recommended due to better video quality/compression)\n     * the image will look as if alpha-blended onto a black background.\n     * However for formats that do not use pre-multiplied alpha\n     * there might be serious artefacts (though e.g. 
libswscale currently\n     * assumes pre-multiplied alpha anyway).\n     *\n     * - decoding: set by user\n     * - encoding: unused\n     */\n    int skip_alpha;\n\n    /**\n     * Number of samples to skip after a discontinuity\n     * - decoding: unused\n     * - encoding: set by libavcodec\n     */\n    int seek_preroll;\n\n#if !FF_API_DEBUG_MV\n    /**\n     * debug motion vectors\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    int debug_mv;\n#define FF_DEBUG_VIS_MV_P_FOR  0x00000001 //visualize forward predicted MVs of P frames\n#define FF_DEBUG_VIS_MV_B_FOR  0x00000002 //visualize forward predicted MVs of B frames\n#define FF_DEBUG_VIS_MV_B_BACK 0x00000004 //visualize backward predicted MVs of B frames\n#endif\n\n    /**\n     * custom chroma intra quantization matrix\n     * - encoding: Set by user, can be NULL.\n     * - decoding: unused.\n     */\n    uint16_t *chroma_intra_matrix;\n\n    /**\n     * dump format separator.\n     * can be \", \" or \"\\n      \" or anything else\n     * - encoding: Set by user.\n     * - decoding: Set by user.\n     */\n    uint8_t *dump_separator;\n\n    /**\n     * ',' separated list of allowed decoders.\n     * If NULL then all are allowed\n     * - encoding: unused\n     * - decoding: set by user\n     */\n    char *codec_whitelist;\n\n    /**\n     * Properties of the stream that gets decoded\n     * - encoding: unused\n     * - decoding: set by libavcodec\n     */\n    unsigned properties;\n#define FF_CODEC_PROPERTY_LOSSLESS        0x00000001\n#define FF_CODEC_PROPERTY_CLOSED_CAPTIONS 0x00000002\n\n    /**\n     * Additional data associated with the entire coded stream.\n     *\n     * - decoding: unused\n     * - encoding: may be set by libavcodec after avcodec_open2().\n     */\n    AVPacketSideData *coded_side_data;\n    int            nb_coded_side_data;\n\n    /**\n     * A reference to the AVHWFramesContext describing the input (for encoding)\n     * or output (decoding) frames. 
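As an\n     * illustration only (a decoder's get_format() callback might do\n     * something along these lines for a hwaccel format; error handling\n     * omitted, and device_ref is assumed to exist already):\n     * @code\n     * AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);\n     * AVHWFramesContext *fctx = (AVHWFramesContext *)frames_ref->data;\n     * fctx->format    = AV_PIX_FMT_VAAPI;\n     * fctx->sw_format = AV_PIX_FMT_NV12;\n     * fctx->width     = avctx->coded_width;\n     * fctx->height    = avctx->coded_height;\n     * if (av_hwframe_ctx_init(frames_ref) >= 0)\n     *     avctx->hw_frames_ctx = frames_ref;\n     * @endcode\n     * 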
The reference is set by the caller and\n     * afterwards owned (and freed) by libavcodec - it should never be read by\n     * the caller after being set.\n     *\n     * - decoding: This field should be set by the caller from the get_format()\n     *             callback. The previous reference (if any) will always be\n     *             unreffed by libavcodec before the get_format() call.\n     *\n     *             If the default get_buffer2() is used with a hwaccel pixel\n     *             format, then this AVHWFramesContext will be used for\n     *             allocating the frame buffers.\n     *\n     * - encoding: For hardware encoders configured to use a hwaccel pixel\n     *             format, this field should be set by the caller to a reference\n     *             to the AVHWFramesContext describing input frames.\n     *             AVHWFramesContext.format must be equal to\n     *             AVCodecContext.pix_fmt.\n     *\n     *             This field should be set before avcodec_open2() is called.\n     */\n    AVBufferRef *hw_frames_ctx;\n\n    /**\n     * Control the form of AVSubtitle.rects[N]->ass\n     * - decoding: set by user\n     * - encoding: unused\n     */\n    int sub_text_format;\n#define FF_SUB_TEXT_FMT_ASS              0\n#if FF_API_ASS_TIMING\n#define FF_SUB_TEXT_FMT_ASS_WITH_TIMINGS 1\n#endif\n\n    /**\n     * Audio only. The amount of padding (in samples) appended by the encoder to\n     * the end of the audio. I.e. 
this number of decoded samples must be\n     * discarded by the caller from the end of the stream to get the original\n     * audio without any trailing padding.\n     *\n     * - decoding: unused\n     * - encoding: unused\n     */\n    int trailing_padding;\n\n    /**\n     * The number of pixels per image to maximally accept.\n     *\n     * - decoding: set by user\n     * - encoding: set by user\n     */\n    int64_t max_pixels;\n\n    /**\n     * A reference to the AVHWDeviceContext describing the device which will\n     * be used by a hardware encoder/decoder.  The reference is set by the\n     * caller and afterwards owned (and freed) by libavcodec.\n     *\n     * This should be used if either the codec device does not require\n     * hardware frames or any that are used are to be allocated internally by\n     * libavcodec.  If the user wishes to supply any of the frames used as\n     * encoder input or decoder output then hw_frames_ctx should be used\n     * instead.  When hw_frames_ctx is set in get_format() for a decoder, this\n     * field will be ignored while decoding the associated stream segment, but\n     * may again be used on a following one after another get_format() call.\n     *\n     * For both encoders and decoders this field should be set before\n     * avcodec_open2() is called and must not be written to thereafter.\n     *\n     * Note that some decoders may require this field to be set initially in\n     * order to support hw_frames_ctx at all - in that case, all frames\n     * contexts used must be created on the same device.\n     */\n    AVBufferRef *hw_device_ctx;\n\n    /**\n     * Bit set of AV_HWACCEL_FLAG_* flags, which affect hardware accelerated\n     * decoding (if active).\n     * - encoding: unused\n     * - decoding: Set by user (either before avcodec_open2(), or in the\n     *             AVCodecContext.get_format callback)\n     */\n    int hwaccel_flags;\n\n    /**\n     * Video decoding only. 
Certain video codecs support cropping, meaning that\n     * only a sub-rectangle of the decoded frame is intended for display.  This\n     * option controls how cropping is handled by libavcodec.\n     *\n     * When set to 1 (the default), libavcodec will apply cropping internally.\n     * I.e. it will modify the output frame width/height fields and offset the\n     * data pointers (only by as much as possible while preserving alignment, or\n     * by the full amount if the AV_CODEC_FLAG_UNALIGNED flag is set) so that\n     * the frames output by the decoder refer only to the cropped area. The\n     * crop_* fields of the output frames will be zero.\n     *\n     * When set to 0, the width/height fields of the output frames will be set\n     * to the coded dimensions and the crop_* fields will describe the cropping\n     * rectangle. Applying the cropping is left to the caller.\n     *\n     * @warning When hardware acceleration with opaque output frames is used,\n     * libavcodec is unable to apply cropping from the top/left border.\n     *\n     * @note when this option is set to zero, the width/height fields of the\n     * AVCodecContext and output AVFrames have different meanings. The codec\n     * context fields store display dimensions (with the coded dimensions in\n     * coded_width/height), while the frame fields store the coded dimensions\n     * (with the display dimensions being determined by the crop_* fields).\n     */\n    int apply_cropping;\n\n    /**\n     * Video decoding only.  Sets the number of extra hardware frames which\n     * the decoder will allocate for use by the caller.  This must be set\n     * before avcodec_open2() is called.\n     *\n     * Some hardware decoders require all frames that they will use for\n     * output to be defined in advance before decoding starts.  
For such\n     * decoders, the hardware frame pool must therefore be of a fixed size.\n     * The extra frames set here are on top of any number that the decoder\n     * needs internally in order to operate normally (for example, frames\n     * used as reference pictures).\n     */\n    int extra_hw_frames;\n} AVCodecContext;\n\n#if FF_API_CODEC_GET_SET\n/**\n * Accessors for some AVCodecContext fields. These used to be provided for ABI\n * compatibility, and do not need to be used anymore.\n */\nattribute_deprecated\nAVRational av_codec_get_pkt_timebase         (const AVCodecContext *avctx);\nattribute_deprecated\nvoid       av_codec_set_pkt_timebase         (AVCodecContext *avctx, AVRational val);\n\nattribute_deprecated\nconst AVCodecDescriptor *av_codec_get_codec_descriptor(const AVCodecContext *avctx);\nattribute_deprecated\nvoid                     av_codec_set_codec_descriptor(AVCodecContext *avctx, const AVCodecDescriptor *desc);\n\nattribute_deprecated\nunsigned av_codec_get_codec_properties(const AVCodecContext *avctx);\n\n#if FF_API_LOWRES\nattribute_deprecated\nint  av_codec_get_lowres(const AVCodecContext *avctx);\nattribute_deprecated\nvoid av_codec_set_lowres(AVCodecContext *avctx, int val);\n#endif\n\nattribute_deprecated\nint  av_codec_get_seek_preroll(const AVCodecContext *avctx);\nattribute_deprecated\nvoid av_codec_set_seek_preroll(AVCodecContext *avctx, int val);\n\nattribute_deprecated\nuint16_t *av_codec_get_chroma_intra_matrix(const AVCodecContext *avctx);\nattribute_deprecated\nvoid av_codec_set_chroma_intra_matrix(AVCodecContext *avctx, uint16_t *val);\n#endif\n\n/**\n * AVProfile.\n */\ntypedef struct AVProfile {\n    int profile;\n    const char *name; ///< short name for the profile\n} AVProfile;\n\nenum {\n    /**\n     * The codec supports this format via the hw_device_ctx interface.\n     *\n     * When selecting this format, AVCodecContext.hw_device_ctx should\n     * have been set to a device of the specified type before calling\n  
   * avcodec_open2().\n     */\n    AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX = 0x01,\n    /**\n     * The codec supports this format via the hw_frames_ctx interface.\n     *\n     * When selecting this format for a decoder,\n     * AVCodecContext.hw_frames_ctx should be set to a suitable frames\n     * context inside the get_format() callback.  The frames context\n     * must have been created on a device of the specified type.\n     */\n    AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX = 0x02,\n    /**\n     * The codec supports this format by some internal method.\n     *\n     * This format can be selected without any additional configuration -\n     * no device or frames context is required.\n     */\n    AV_CODEC_HW_CONFIG_METHOD_INTERNAL      = 0x04,\n    /**\n     * The codec supports this format by some ad-hoc method.\n     *\n     * Additional settings and/or function calls are required.  See the\n     * codec-specific documentation for details.  (Methods requiring\n     * this sort of configuration are deprecated and others should be\n     * used in preference.)\n     */\n    AV_CODEC_HW_CONFIG_METHOD_AD_HOC        = 0x08,\n};\n\ntypedef struct AVCodecHWConfig {\n    /**\n     * A hardware pixel format which the codec can use.\n     */\n    enum AVPixelFormat pix_fmt;\n    /**\n     * Bit set of AV_CODEC_HW_CONFIG_METHOD_* flags, describing the possible\n     * setup methods which can be used with this configuration.\n     */\n    int methods;\n    /**\n     * The device type associated with the configuration.\n     *\n     * Must be set for AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX and\n     * AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX, otherwise unused.\n     */\n    enum AVHWDeviceType device_type;\n} AVCodecHWConfig;\n\ntypedef struct AVCodecDefault AVCodecDefault;\n\nstruct AVSubtitle;\n\n/**\n * AVCodec.\n */\ntypedef struct AVCodec {\n    /**\n     * Name of the codec implementation.\n     * The name is globally unique among encoders and among decoders (but 
an\n     * encoder and a decoder can share the same name).\n     * This is the primary way to find a codec from the user perspective.\n     */\n    const char *name;\n    /**\n     * Descriptive name for the codec, meant to be more human-readable than name.\n     * You should use the NULL_IF_CONFIG_SMALL() macro to define it.\n     */\n    const char *long_name;\n    enum AVMediaType type;\n    enum AVCodecID id;\n    /**\n     * Codec capabilities.\n     * see AV_CODEC_CAP_*\n     */\n    int capabilities;\n    const AVRational *supported_framerates; ///< array of supported framerates, or NULL if any, array is terminated by {0,0}\n    const enum AVPixelFormat *pix_fmts;     ///< array of supported pixel formats, or NULL if unknown, array is terminated by -1\n    const int *supported_samplerates;       ///< array of supported audio samplerates, or NULL if unknown, array is terminated by 0\n    const enum AVSampleFormat *sample_fmts; ///< array of supported sample formats, or NULL if unknown, array is terminated by -1\n    const uint64_t *channel_layouts;         ///< array of supported channel layouts, or NULL if unknown, array is terminated by 0\n    uint8_t max_lowres;                     ///< maximum value for lowres supported by the decoder\n    const AVClass *priv_class;              ///< AVClass for the private context\n    const AVProfile *profiles;              ///< array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN}\n\n    /**\n     * Group name of the codec implementation.\n     * This is a short symbolic name of the wrapper backing this codec. 
A\n     * wrapper uses some kind of external implementation for the codec, such\n     * as an external library, or a codec implementation provided by the OS or\n     * the hardware.\n     * If this field is NULL, this is a builtin, libavcodec native codec.\n     * If non-NULL, this will be the suffix in AVCodec.name in most cases\n     * (usually AVCodec.name will be of the form \"<codec_name>_<wrapper_name>\").\n     */\n    const char *wrapper_name;\n\n    /*****************************************************************\n     * No fields below this line are part of the public API. They\n     * may not be used outside of libavcodec and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n    int priv_data_size;\n    struct AVCodec *next;\n    /**\n     * @name Frame-level threading support functions\n     * @{\n     */\n    /**\n     * If defined, called on thread contexts when they are created.\n     * If the codec allocates writable tables in init(), re-allocate them here.\n     * priv_data will be set to a copy of the original.\n     */\n    int (*init_thread_copy)(AVCodecContext *);\n    /**\n     * Copy necessary context variables from a previous thread context to the current one.\n     * If not defined, the next thread will start automatically; otherwise, the codec\n     * must call ff_thread_finish_setup().\n     *\n     * dst and src will (rarely) point to the same context, in which case memcpy should be skipped.\n     */\n    int (*update_thread_context)(AVCodecContext *dst, const AVCodecContext *src);\n    /** @} */\n\n    /**\n     * Private codec-specific defaults.\n     */\n    const AVCodecDefault *defaults;\n\n    /**\n     * Initialize codec static data, called from avcodec_register().\n     *\n     * This is not intended for time consuming operations as it is\n     * run for every codec regardless of that codec being 
used.\n     */\n    void (*init_static_data)(struct AVCodec *codec);\n\n    int (*init)(AVCodecContext *);\n    int (*encode_sub)(AVCodecContext *, uint8_t *buf, int buf_size,\n                      const struct AVSubtitle *sub);\n    /**\n     * Encode data to an AVPacket.\n     *\n     * @param      avctx          codec context\n     * @param      avpkt          output AVPacket (may contain a user-provided buffer)\n     * @param[in]  frame          AVFrame containing the raw data to be encoded\n     * @param[out] got_packet_ptr set by the encoder to 0 or 1 to indicate\n     *                            whether a non-empty packet was returned in avpkt.\n     * @return 0 on success, negative error code on failure\n     */\n    int (*encode2)(AVCodecContext *avctx, AVPacket *avpkt, const AVFrame *frame,\n                   int *got_packet_ptr);\n    int (*decode)(AVCodecContext *, void *outdata, int *outdata_size, AVPacket *avpkt);\n    int (*close)(AVCodecContext *);\n    /**\n     * Encode API with decoupled packet/frame dataflow. The API is the\n     * same as the avcodec_ prefixed APIs (avcodec_send_frame() etc.), except\n     * that:\n     * - never called if the codec is closed or the wrong type,\n     * - if AV_CODEC_CAP_DELAY is not set, drain frames are never sent,\n     * - only one drain frame is ever passed down.\n     */\n    int (*send_frame)(AVCodecContext *avctx, const AVFrame *frame);\n    int (*receive_packet)(AVCodecContext *avctx, AVPacket *avpkt);\n\n    /**\n     * Decode API with decoupled packet/frame dataflow. This function is called\n     * to get one output frame. 
It should call ff_decode_get_packet() to obtain\n     * input data.\n     */\n    int (*receive_frame)(AVCodecContext *avctx, AVFrame *frame);\n    /**\n     * Flush buffers.\n     * Will be called when seeking.\n     */\n    void (*flush)(AVCodecContext *);\n    /**\n     * Internal codec capabilities.\n     * See FF_CODEC_CAP_* in internal.h\n     */\n    int caps_internal;\n\n    /**\n     * Decoding only, a comma-separated list of bitstream filters to apply to\n     * packets before decoding.\n     */\n    const char *bsfs;\n\n    /**\n     * Array of pointers to hardware configurations supported by the codec,\n     * or NULL if no hardware is supported.  The array is terminated by a NULL\n     * pointer.\n     *\n     * The user can only access this field via avcodec_get_hw_config().\n     */\n    const struct AVCodecHWConfigInternal **hw_configs;\n} AVCodec;\n\n#if FF_API_CODEC_GET_SET\nattribute_deprecated\nint av_codec_get_max_lowres(const AVCodec *codec);\n#endif\n\nstruct MpegEncContext;\n\n/**\n * Retrieve supported hardware configurations for a codec.\n *\n * Values of index from zero to some maximum return the indexed configuration\n * descriptor; all other values return NULL.  If the codec does not support\n * any hardware configurations then it will always return NULL.\n */\nconst AVCodecHWConfig *avcodec_get_hw_config(const AVCodec *codec, int index);\n\n/**\n * @defgroup lavc_hwaccel AVHWAccel\n *\n * @note  Nothing in this structure should be accessed by the user.  
At some\n *        point in future it will not be externally visible at all.\n *\n * @{\n */\ntypedef struct AVHWAccel {\n    /**\n     * Name of the hardware accelerated codec.\n     * The name is globally unique among encoders and among decoders (but an\n     * encoder and a decoder can share the same name).\n     */\n    const char *name;\n\n    /**\n     * Type of codec implemented by the hardware accelerator.\n     *\n     * See AVMEDIA_TYPE_xxx\n     */\n    enum AVMediaType type;\n\n    /**\n     * Codec implemented by the hardware accelerator.\n     *\n     * See AV_CODEC_ID_xxx\n     */\n    enum AVCodecID id;\n\n    /**\n     * Supported pixel format.\n     *\n     * Only hardware accelerated formats are supported here.\n     */\n    enum AVPixelFormat pix_fmt;\n\n    /**\n     * Hardware accelerated codec capabilities.\n     * see AV_HWACCEL_CODEC_CAP_*\n     */\n    int capabilities;\n\n    /*****************************************************************\n     * No fields below this line are part of the public API. They\n     * may not be used outside of libavcodec and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n\n    /**\n     * Allocate a custom buffer\n     */\n    int (*alloc_frame)(AVCodecContext *avctx, AVFrame *frame);\n\n    /**\n     * Called at the beginning of each frame or field picture.\n     *\n     * Meaningful frame information (codec specific) is guaranteed to\n     * be parsed at this point. 
This function is mandatory.\n     *\n     * Note that buf can be NULL along with buf_size set to 0.\n     * Otherwise, this means the whole frame is available at this point.\n     *\n     * @param avctx the codec context\n     * @param buf the frame data buffer base\n     * @param buf_size the size of the frame in bytes\n     * @return zero if successful, a negative value otherwise\n     */\n    int (*start_frame)(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size);\n\n    /**\n     * Callback for parameter data (SPS/PPS/VPS etc).\n     *\n     * Useful for hardware decoders which keep persistent state about the\n     * video parameters, and need to receive any changes to update that state.\n     *\n     * @param avctx the codec context\n     * @param type the nal unit type\n     * @param buf the nal unit data buffer\n     * @param buf_size the size of the nal unit in bytes\n     * @return zero if successful, a negative value otherwise\n     */\n    int (*decode_params)(AVCodecContext *avctx, int type, const uint8_t *buf, uint32_t buf_size);\n\n    /**\n     * Callback for each slice.\n     *\n     * Meaningful slice information (codec specific) is guaranteed to\n     * be parsed at this point. This function is mandatory.\n     * The only exception is XvMC, that works on MB level.\n     *\n     * @param avctx the codec context\n     * @param buf the slice data buffer base\n     * @param buf_size the size of the slice in bytes\n     * @return zero if successful, a negative value otherwise\n     */\n    int (*decode_slice)(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size);\n\n    /**\n     * Called at the end of each frame or field picture.\n     *\n     * The whole picture is parsed at this point and can now be sent\n     * to the hardware accelerator. 
This function is mandatory.\n     *\n     * @param avctx the codec context\n     * @return zero if successful, a negative value otherwise\n     */\n    int (*end_frame)(AVCodecContext *avctx);\n\n    /**\n     * Size of per-frame hardware accelerator private data.\n     *\n     * Private data is allocated with av_mallocz() before\n     * AVCodecContext.get_buffer() and deallocated after\n     * AVCodecContext.release_buffer().\n     */\n    int frame_priv_data_size;\n\n    /**\n     * Called for every Macroblock in a slice.\n     *\n     * XvMC uses it to replace the ff_mpv_reconstruct_mb().\n     * Instead of decoding to raw picture, MB parameters are\n     * stored in an array provided by the video driver.\n     *\n     * @param s the mpeg context\n     */\n    void (*decode_mb)(struct MpegEncContext *s);\n\n    /**\n     * Initialize the hwaccel private data.\n     *\n     * This will be called from ff_get_format(), after hwaccel and\n     * hwaccel_context are set and the hwaccel private data in AVCodecInternal\n     * is allocated.\n     */\n    int (*init)(AVCodecContext *avctx);\n\n    /**\n     * Uninitialize the hwaccel private data.\n     *\n     * This will be called from get_format() or avcodec_close(), after hwaccel\n     * and hwaccel_context are already uninitialized.\n     */\n    int (*uninit)(AVCodecContext *avctx);\n\n    /**\n     * Size of the private data to allocate in\n     * AVCodecInternal.hwaccel_priv_data.\n     */\n    int priv_data_size;\n\n    /**\n     * Internal hwaccel capabilities.\n     */\n    int caps_internal;\n\n    /**\n     * Fill the given hw_frames context with current codec parameters. Called\n     * from get_format. 
Refer to avcodec_get_hw_frames_parameters() for\n     * details.\n     *\n     * This CAN be called before AVHWAccel.init is called, and you must assume\n     * that avctx->hwaccel_priv_data is invalid.\n     */\n    int (*frame_params)(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx);\n} AVHWAccel;\n\n/**\n * HWAccel is experimental and is thus avoided in favor of non experimental\n * codecs\n */\n#define AV_HWACCEL_CODEC_CAP_EXPERIMENTAL 0x0200\n\n/**\n * Hardware acceleration should be used for decoding even if the codec level\n * used is unknown or higher than the maximum supported level reported by the\n * hardware driver.\n *\n * It's generally a good idea to pass this flag unless you have a specific\n * reason not to, as hardware tends to under-report supported levels.\n */\n#define AV_HWACCEL_FLAG_IGNORE_LEVEL (1 << 0)\n\n/**\n * Hardware acceleration can output YUV pixel formats with a different chroma\n * sampling than 4:2:0 and/or other than 8 bits per component.\n */\n#define AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH (1 << 1)\n\n/**\n * Hardware acceleration should still be attempted for decoding when the\n * codec profile does not match the reported capabilities of the hardware.\n *\n * For example, this can be used to try to decode baseline profile H.264\n * streams in hardware - it will often succeed, because many streams marked\n * as baseline profile actually conform to constrained baseline profile.\n *\n * @warning If the stream is actually not supported then the behaviour is\n *          undefined, and may include returning entirely incorrect output\n *          while indicating success.\n */\n#define AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH (1 << 2)\n\n/**\n * @}\n */\n\n#if FF_API_AVPICTURE\n/**\n * @defgroup lavc_picture AVPicture\n *\n * Functions for working with AVPicture\n * @{\n */\n\n/**\n * Picture data structure.\n *\n * Up to four components can be stored into it, the last component is\n * alpha.\n * @deprecated use AVFrame or imgutils 
functions instead\n */\ntypedef struct AVPicture {\n    attribute_deprecated\n    uint8_t *data[AV_NUM_DATA_POINTERS];    ///< pointers to the image data planes\n    attribute_deprecated\n    int linesize[AV_NUM_DATA_POINTERS];     ///< number of bytes per line\n} AVPicture;\n\n/**\n * @}\n */\n#endif\n\nenum AVSubtitleType {\n    SUBTITLE_NONE,\n\n    SUBTITLE_BITMAP,                ///< A bitmap, pict will be set\n\n    /**\n     * Plain text, the text field must be set by the decoder and is\n     * authoritative. ass and pict fields may contain approximations.\n     */\n    SUBTITLE_TEXT,\n\n    /**\n     * Formatted text, the ass field must be set by the decoder and is\n     * authoritative. pict and text fields may contain approximations.\n     */\n    SUBTITLE_ASS,\n};\n\n#define AV_SUBTITLE_FLAG_FORCED 0x00000001\n\ntypedef struct AVSubtitleRect {\n    int x;         ///< top left corner  of pict, undefined when pict is not set\n    int y;         ///< top left corner  of pict, undefined when pict is not set\n    int w;         ///< width            of pict, undefined when pict is not set\n    int h;         ///< height           of pict, undefined when pict is not set\n    int nb_colors; ///< number of colors in pict, undefined when pict is not set\n\n#if FF_API_AVPICTURE\n    /**\n     * @deprecated unused\n     */\n    attribute_deprecated\n    AVPicture pict;\n#endif\n    /**\n     * data+linesize for the bitmap of this subtitle.\n     * Can be set for text/ass as well once they are rendered.\n     */\n    uint8_t *data[4];\n    int linesize[4];\n\n    enum AVSubtitleType type;\n\n    char *text;                     ///< 0 terminated plain UTF-8 text\n\n    /**\n     * 0 terminated ASS/SSA compatible event line.\n     * The presentation of this is unaffected by the other values in this\n     * struct.\n     */\n    char *ass;\n\n    int flags;\n} AVSubtitleRect;\n\ntypedef struct AVSubtitle {\n    uint16_t format; /* 0 = graphics */\n    uint32_t 
start_display_time; /* relative to packet pts, in ms */\n    uint32_t end_display_time; /* relative to packet pts, in ms */\n    unsigned num_rects;\n    AVSubtitleRect **rects;\n    int64_t pts;    ///< Same as packet pts, in AV_TIME_BASE\n} AVSubtitle;\n\n/**\n * This struct describes the properties of an encoded stream.\n *\n * sizeof(AVCodecParameters) is not a part of the public ABI, this struct must\n * be allocated with avcodec_parameters_alloc() and freed with\n * avcodec_parameters_free().\n */\ntypedef struct AVCodecParameters {\n    /**\n     * General type of the encoded data.\n     */\n    enum AVMediaType codec_type;\n    /**\n     * Specific type of the encoded data (the codec used).\n     */\n    enum AVCodecID   codec_id;\n    /**\n     * Additional information about the codec (corresponds to the AVI FOURCC).\n     */\n    uint32_t         codec_tag;\n\n    /**\n     * Extra binary data needed for initializing the decoder, codec-dependent.\n     *\n     * Must be allocated with av_malloc() and will be freed by\n     * avcodec_parameters_free(). The allocated size of extradata must be at\n     * least extradata_size + AV_INPUT_BUFFER_PADDING_SIZE, with the padding\n     * bytes zeroed.\n     */\n    uint8_t *extradata;\n    /**\n     * Size of the extradata content in bytes.\n     */\n    int      extradata_size;\n\n    /**\n     * - video: the pixel format, the value corresponds to enum AVPixelFormat.\n     * - audio: the sample format, the value corresponds to enum AVSampleFormat.\n     */\n    int format;\n\n    /**\n     * The average bitrate of the encoded data (in bits per second).\n     */\n    int64_t bit_rate;\n\n    /**\n     * The number of bits per sample in the coded words.\n     *\n     * This is basically the bitrate per sample. It is mandatory for a number of\n     * formats to actually decode them. 
It's the number of bits for one sample in\n     * the actual coded bitstream.\n     *\n     * This could be, for example, 4 for ADPCM.\n     * For PCM formats this matches bits_per_raw_sample.\n     * Can be 0.\n     */\n    int bits_per_coded_sample;\n\n    /**\n     * This is the number of valid bits in each output sample. If the\n     * sample format has more bits, the least significant bits are additional\n     * padding bits, which are always 0. Use right shifts to reduce the sample\n     * to its actual size. For example, audio formats with 24 bit samples will\n     * have bits_per_raw_sample set to 24, and format set to AV_SAMPLE_FMT_S32.\n     * To get the original sample use \"(int32_t)sample >> 8\".\n     *\n     * For ADPCM this might be 12, 16, or similar.\n     * Can be 0.\n     */\n    int bits_per_raw_sample;\n\n    /**\n     * Codec-specific bitstream restrictions that the stream conforms to.\n     */\n    int profile;\n    int level;\n\n    /**\n     * Video only. The dimensions of the video frame in pixels.\n     */\n    int width;\n    int height;\n\n    /**\n     * Video only. The aspect ratio (width / height) which a single pixel\n     * should have when displayed.\n     *\n     * When the aspect ratio is unknown / undefined, the numerator should be\n     * set to 0 (the denominator may have any value).\n     */\n    AVRational sample_aspect_ratio;\n\n    /**\n     * Video only. The order of the fields in interlaced video.\n     */\n    enum AVFieldOrder                  field_order;\n\n    /**\n     * Video only. Additional colorspace characteristics.\n     */\n    enum AVColorRange                  color_range;\n    enum AVColorPrimaries              color_primaries;\n    enum AVColorTransferCharacteristic color_trc;\n    enum AVColorSpace                  color_space;\n    enum AVChromaLocation              chroma_location;\n\n    /**\n     * Video only. Number of delayed frames.\n     */\n    int video_delay;\n\n    /**\n     * Audio only. 
The channel layout bitmask. May be 0 if the channel layout is\n     * unknown or unspecified, otherwise the number of bits set must be equal to\n     * the channels field.\n     */\n    uint64_t channel_layout;\n    /**\n     * Audio only. The number of audio channels.\n     */\n    int      channels;\n    /**\n     * Audio only. The number of audio samples per second.\n     */\n    int      sample_rate;\n    /**\n     * Audio only. The number of bytes per coded audio frame, required by some\n     * formats.\n     *\n     * Corresponds to nBlockAlign in WAVEFORMATEX.\n     */\n    int      block_align;\n    /**\n     * Audio only. Audio frame size, if known. Required by some formats to be static.\n     */\n    int      frame_size;\n\n    /**\n     * Audio only. The amount of padding (in samples) inserted by the encoder at\n     * the beginning of the audio. I.e. this number of leading decoded samples\n     * must be discarded by the caller to get the original audio without leading\n     * padding.\n     */\n    int initial_padding;\n    /**\n     * Audio only. The amount of padding (in samples) appended by the encoder to\n     * the end of the audio. I.e. this number of decoded samples must be\n     * discarded by the caller from the end of the stream to get the original\n     * audio without any trailing padding.\n     */\n    int trailing_padding;\n    /**\n     * Audio only. Number of samples to skip after a discontinuity.\n     */\n    int seek_preroll;\n} AVCodecParameters;\n\n/**\n * Iterate over all registered codecs.\n *\n * @param opaque a pointer where libavcodec will store the iteration state. 
Must\n *               point to NULL to start the iteration.\n *\n * @return the next registered codec or NULL when the iteration is\n *         finished\n */\nconst AVCodec *av_codec_iterate(void **opaque);\n\n#if FF_API_NEXT\n/**\n * If c is NULL, returns the first registered codec,\n * if c is non-NULL, returns the next registered codec after c,\n * or NULL if c is the last one.\n */\nattribute_deprecated\nAVCodec *av_codec_next(const AVCodec *c);\n#endif\n\n/**\n * Return the LIBAVCODEC_VERSION_INT constant.\n */\nunsigned avcodec_version(void);\n\n/**\n * Return the libavcodec build-time configuration.\n */\nconst char *avcodec_configuration(void);\n\n/**\n * Return the libavcodec license.\n */\nconst char *avcodec_license(void);\n\n#if FF_API_NEXT\n/**\n * Register the codec codec and initialize libavcodec.\n *\n * @warning either this function or avcodec_register_all() must be called\n * before any other libavcodec functions.\n *\n * @see avcodec_register_all()\n */\nattribute_deprecated\nvoid avcodec_register(AVCodec *codec);\n\n/**\n * Register all the codecs, parsers and bitstream filters which were enabled at\n * configuration time. If you do not call this function you can select exactly\n * which formats you want to support, by using the individual registration\n * functions.\n *\n * @see avcodec_register\n * @see av_register_codec_parser\n * @see av_register_bitstream_filter\n */\nattribute_deprecated\nvoid avcodec_register_all(void);\n#endif\n\n/**\n * Allocate an AVCodecContext and set its fields to default values. The\n * resulting struct should be freed with avcodec_free_context().\n *\n * @param codec if non-NULL, allocate private data and initialize defaults\n *              for the given codec. 
It is illegal to then call avcodec_open2()\n *              with a different codec.\n *              If NULL, then the codec-specific defaults won't be initialized,\n *              which may result in suboptimal default settings (this is\n *              important mainly for encoders, e.g. libx264).\n *\n * @return An AVCodecContext filled with default values or NULL on failure.\n */\nAVCodecContext *avcodec_alloc_context3(const AVCodec *codec);\n\n/**\n * Free the codec context and everything associated with it and write NULL to\n * the provided pointer.\n */\nvoid avcodec_free_context(AVCodecContext **avctx);\n\n#if FF_API_GET_CONTEXT_DEFAULTS\n/**\n * @deprecated This function should not be used, as closing and opening a codec\n * context multiple times is not supported. A new codec context should be\n * allocated for each new use.\n */\nint avcodec_get_context_defaults3(AVCodecContext *s, const AVCodec *codec);\n#endif\n\n/**\n * Get the AVClass for AVCodecContext. It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n */\nconst AVClass *avcodec_get_class(void);\n\n#if FF_API_COPY_CONTEXT\n/**\n * Get the AVClass for AVFrame. It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n */\nconst AVClass *avcodec_get_frame_class(void);\n\n/**\n * Get the AVClass for AVSubtitleRect. It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n */\nconst AVClass *avcodec_get_subtitle_rect_class(void);\n\n/**\n * Copy the settings of the source AVCodecContext into the destination\n * AVCodecContext. The resulting destination codec context will be\n * unopened, i.e. 
you are required to call avcodec_open2() before you\n * can use this AVCodecContext to decode/encode video/audio data.\n *\n * @param dest target codec context, should be initialized with\n *             avcodec_alloc_context3(NULL), but otherwise uninitialized\n * @param src source codec context\n * @return AVERROR() on error (e.g. memory allocation error), 0 on success\n *\n * @deprecated The semantics of this function are ill-defined and it should not\n * be used. If you need to transfer the stream parameters from one codec context\n * to another, use an intermediate AVCodecParameters instance and the\n * avcodec_parameters_from_context() / avcodec_parameters_to_context()\n * functions.\n */\nattribute_deprecated\nint avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src);\n#endif\n\n/**\n * Allocate a new AVCodecParameters and set its fields to default values\n * (unknown/invalid/0). The returned struct must be freed with\n * avcodec_parameters_free().\n */\nAVCodecParameters *avcodec_parameters_alloc(void);\n\n/**\n * Free an AVCodecParameters instance and everything associated with it and\n * write NULL to the supplied pointer.\n */\nvoid avcodec_parameters_free(AVCodecParameters **par);\n\n/**\n * Copy the contents of src to dst. Any allocated fields in dst are freed and\n * replaced with newly allocated duplicates of the corresponding fields in src.\n *\n * @return >= 0 on success, a negative AVERROR code on failure.\n */\nint avcodec_parameters_copy(AVCodecParameters *dst, const AVCodecParameters *src);\n\n/**\n * Fill the parameters struct based on the values from the supplied codec\n * context. 
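For example, stream parameters can be carried from one\n * codec context to another through an intermediate AVCodecParameters, as the\n * deprecation note for avcodec_copy_context() recommends (sketch; src_ctx and\n * dst_ctx are hypothetical, error checks omitted):\n * @code\n * AVCodecParameters *par = avcodec_parameters_alloc();\n * if (!par)\n *     exit(1);\n * avcodec_parameters_from_context(par, src_ctx);\n * avcodec_parameters_to_context(dst_ctx, par);\n * avcodec_parameters_free(&par);\n * @endcode\n *\n * 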
Any allocated fields in par are freed and replaced with duplicates\n * of the corresponding fields in codec.\n *\n * @return >= 0 on success, a negative AVERROR code on failure\n */\nint avcodec_parameters_from_context(AVCodecParameters *par,\n                                    const AVCodecContext *codec);\n\n/**\n * Fill the codec context based on the values from the supplied codec\n * parameters. Any allocated fields in codec that have a corresponding field in\n * par are freed and replaced with duplicates of the corresponding field in par.\n * Fields in codec that do not have a counterpart in par are not touched.\n *\n * @return >= 0 on success, a negative AVERROR code on failure.\n */\nint avcodec_parameters_to_context(AVCodecContext *codec,\n                                  const AVCodecParameters *par);\n\n/**\n * Initialize the AVCodecContext to use the given AVCodec. Prior to using this\n * function the context has to be allocated with avcodec_alloc_context3().\n *\n * The functions avcodec_find_decoder_by_name(), avcodec_find_encoder_by_name(),\n * avcodec_find_decoder() and avcodec_find_encoder() provide an easy way for\n * retrieving a codec.\n *\n * @warning This function is not thread safe!\n *\n * @note Always call this function before using decoding routines (such as\n * @ref avcodec_receive_frame()).\n *\n * @code\n * AVDictionary *opts = NULL;\n * av_dict_set(&opts, \"b\", \"2.5M\", 0);\n * codec = avcodec_find_decoder(AV_CODEC_ID_H264);\n * if (!codec)\n *     exit(1);\n *\n * context = avcodec_alloc_context3(codec);\n *\n * if (avcodec_open2(context, codec, &opts) < 0)\n *     exit(1);\n * @endcode\n *\n * @param avctx The context to initialize.\n * @param codec The codec to open this context for. 
If a non-NULL codec has been\n *              previously passed to avcodec_alloc_context3() for\n *              this context, then this parameter MUST be either NULL or\n *              equal to the previously passed codec.\n * @param options A dictionary filled with AVCodecContext and codec-private options.\n *                On return this object will be filled with options that were not found.\n *\n * @return zero on success, a negative value on error\n * @see avcodec_alloc_context3(), avcodec_find_decoder(), avcodec_find_encoder(),\n *      av_dict_set(), av_opt_find().\n */\nint avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options);\n\n/**\n * Close a given AVCodecContext and free all the data associated with it\n * (but not the AVCodecContext itself).\n *\n * Calling this function on an AVCodecContext that hasn't been opened will free\n * the codec-specific data allocated in avcodec_alloc_context3() with a non-NULL\n * codec. Subsequent calls will do nothing.\n *\n * @note Do not use this function. Use avcodec_free_context() to destroy a\n * codec context (either open or closed). Opening and closing a codec context\n * multiple times is not supported anymore -- use multiple codec contexts\n * instead.\n */\nint avcodec_close(AVCodecContext *avctx);\n\n/**\n * Free all allocated data in the given subtitle struct.\n *\n * @param sub AVSubtitle to free.\n */\nvoid avsubtitle_free(AVSubtitle *sub);\n\n/**\n * @}\n */\n\n/**\n * @addtogroup lavc_packet\n * @{\n */\n\n/**\n * Allocate an AVPacket and set its fields to default values. The resulting\n * struct must be freed using av_packet_free().\n *\n * @return An AVPacket filled with default values or NULL on failure.\n *\n * @note This only allocates the AVPacket itself, not the data buffers. 
Those\n * must be allocated through other means such as av_new_packet.\n *\n * @see av_new_packet\n */\nAVPacket *av_packet_alloc(void);\n\n/**\n * Create a new packet that references the same data as src.\n *\n * This is a shortcut for av_packet_alloc()+av_packet_ref().\n *\n * @return newly created AVPacket on success, NULL on error.\n *\n * @see av_packet_alloc\n * @see av_packet_ref\n */\nAVPacket *av_packet_clone(const AVPacket *src);\n\n/**\n * Free the packet. If the packet is reference counted, it will be\n * unreferenced first.\n *\n * @param pkt packet to be freed. The pointer will be set to NULL.\n * @note passing NULL is a no-op.\n */\nvoid av_packet_free(AVPacket **pkt);\n\n/**\n * Initialize optional fields of a packet with default values.\n *\n * Note that this does not touch the data and size members, which have to be\n * initialized separately.\n *\n * @param pkt packet\n */\nvoid av_init_packet(AVPacket *pkt);\n\n/**\n * Allocate the payload of a packet and initialize its fields with\n * default values.\n *\n * @param pkt packet\n * @param size wanted payload size\n * @return 0 if OK, AVERROR_xxx otherwise\n */\nint av_new_packet(AVPacket *pkt, int size);\n\n/**\n * Reduce packet size, correctly zeroing padding.\n *\n * @param pkt packet\n * @param size new size\n */\nvoid av_shrink_packet(AVPacket *pkt, int size);\n\n/**\n * Increase packet size, correctly zeroing padding.\n *\n * @param pkt packet\n * @param grow_by number of bytes by which to increase the size of the packet\n * @return 0 on success, a negative AVERROR on error\n */\nint av_grow_packet(AVPacket *pkt, int grow_by);\n\n/**\n * Initialize a reference-counted packet from av_malloc()ed data.\n *\n * @param pkt packet to be initialized. This function will set the data, size,\n *        buf and destruct fields, all others are left untouched.\n * @param data Data allocated by av_malloc() to be used as packet data. 
If this\n *        function returns successfully, the data is owned by the underlying AVBuffer.\n *        The caller may not access the data through other means.\n * @param size size of data in bytes, without the padding. I.e. the full buffer\n *        size is assumed to be size + AV_INPUT_BUFFER_PADDING_SIZE.\n *\n * @return 0 on success, a negative AVERROR on error\n */\nint av_packet_from_data(AVPacket *pkt, uint8_t *data, int size);\n\n#if FF_API_AVPACKET_OLD_API\n/**\n * @warning This is a hack - the packet memory allocation stuff is broken. The\n * packet is allocated if it was not really allocated.\n *\n * @deprecated Use av_packet_ref or av_packet_make_refcounted\n */\nattribute_deprecated\nint av_dup_packet(AVPacket *pkt);\n/**\n * Copy packet, including contents\n *\n * @return 0 on success, negative AVERROR on fail\n *\n * @deprecated Use av_packet_ref\n */\nattribute_deprecated\nint av_copy_packet(AVPacket *dst, const AVPacket *src);\n\n/**\n * Copy packet side data\n *\n * @return 0 on success, negative AVERROR on fail\n *\n * @deprecated Use av_packet_copy_props\n */\nattribute_deprecated\nint av_copy_packet_side_data(AVPacket *dst, const AVPacket *src);\n\n/**\n * Free a packet.\n *\n * @deprecated Use av_packet_unref\n *\n * @param pkt packet to free\n */\nattribute_deprecated\nvoid av_free_packet(AVPacket *pkt);\n#endif\n/**\n * Allocate new side information of a packet.\n *\n * @param pkt packet\n * @param type side information type\n * @param size side information size\n * @return pointer to freshly allocated data or NULL otherwise\n */\nuint8_t* av_packet_new_side_data(AVPacket *pkt, enum AVPacketSideDataType type,\n                                 int size);\n\n/**\n * Wrap an existing array as packet side data.\n *\n * @param pkt packet\n * @param type side information type\n * @param data the side data array. It must be allocated with the av_malloc()\n *             family of functions. 
The ownership of the data is transferred to\n *             pkt.\n * @param size side information size\n * @return a non-negative number on success, a negative AVERROR code on\n *         failure. On failure, the packet is unchanged and the data remains\n *         owned by the caller.\n */\nint av_packet_add_side_data(AVPacket *pkt, enum AVPacketSideDataType type,\n                            uint8_t *data, size_t size);\n\n/**\n * Shrink the already allocated side data buffer\n *\n * @param pkt packet\n * @param type side information type\n * @param size new side information size\n * @return 0 on success, < 0 on failure\n */\nint av_packet_shrink_side_data(AVPacket *pkt, enum AVPacketSideDataType type,\n                               int size);\n\n/**\n * Get side information from packet.\n *\n * @param pkt packet\n * @param type desired side information type\n * @param size pointer for side information size to store (optional)\n * @return pointer to data if present or NULL otherwise\n */\nuint8_t* av_packet_get_side_data(const AVPacket *pkt, enum AVPacketSideDataType type,\n                                 int *size);\n\n#if FF_API_MERGE_SD_API\nattribute_deprecated\nint av_packet_merge_side_data(AVPacket *pkt);\n\nattribute_deprecated\nint av_packet_split_side_data(AVPacket *pkt);\n#endif\n\nconst char *av_packet_side_data_name(enum AVPacketSideDataType type);\n\n/**\n * Pack a dictionary for use in side_data.\n *\n * @param dict The dictionary to pack.\n * @param size pointer to store the size of the returned data\n * @return pointer to data if successful, NULL otherwise\n */\nuint8_t *av_packet_pack_dictionary(AVDictionary *dict, int *size);\n/**\n * Unpack a dictionary from side_data.\n *\n * @param data data from side_data\n * @param size size of the data\n * @param dict the metadata storage dictionary\n * @return 0 on success, < 0 on failure\n */\nint av_packet_unpack_dictionary(const uint8_t *data, int size, AVDictionary **dict);\n\n\n/**\n * Convenience 
function to free all the side data stored.\n * All the other fields stay untouched.\n *\n * @param pkt packet\n */\nvoid av_packet_free_side_data(AVPacket *pkt);\n\n/**\n * Set up a new reference to the data described by a given packet.\n *\n * If src is reference-counted, set up dst as a new reference to the\n * buffer in src. Otherwise allocate a new buffer in dst and copy the\n * data from src into it.\n *\n * All the other fields are copied from src.\n *\n * @see av_packet_unref\n *\n * @param dst Destination packet\n * @param src Source packet\n *\n * @return 0 on success, a negative AVERROR on error.\n */\nint av_packet_ref(AVPacket *dst, const AVPacket *src);\n\n/**\n * Wipe the packet.\n *\n * Unreference the buffer referenced by the packet and reset the\n * remaining packet fields to their default values.\n *\n * @param pkt The packet to be unreferenced.\n */\nvoid av_packet_unref(AVPacket *pkt);\n\n/**\n * Move every field in src to dst and reset src.\n *\n * @see av_packet_unref\n *\n * @param src Source packet, will be reset\n * @param dst Destination packet\n */\nvoid av_packet_move_ref(AVPacket *dst, AVPacket *src);\n\n/**\n * Copy only \"properties\" fields from src to dst.\n *\n * Properties for the purpose of this function are all the fields\n * besides those related to the packet data (buf, data, size).\n *\n * @param dst Destination packet\n * @param src Source packet\n *\n * @return 0 on success, a negative AVERROR on failure.\n */\nint av_packet_copy_props(AVPacket *dst, const AVPacket *src);\n\n/**\n * Ensure the data described by a given packet is reference counted.\n *\n * @note This function does not ensure that the reference will be writable.\n *       Use av_packet_make_writable instead for that purpose.\n *\n * @see av_packet_ref\n * @see av_packet_make_writable\n *\n * @param pkt packet whose data should be made reference counted.\n *\n * @return 0 on success, a negative AVERROR on error. 
On failure, the\n *         packet is unchanged.\n */\nint av_packet_make_refcounted(AVPacket *pkt);\n\n/**\n * Create a writable reference for the data described by a given packet,\n * avoiding data copy if possible.\n *\n * @param pkt Packet whose data should be made writable.\n *\n * @return 0 on success, a negative AVERROR on failure. On failure, the\n *         packet is unchanged.\n */\nint av_packet_make_writable(AVPacket *pkt);\n\n/**\n * Convert valid timing fields (timestamps / durations) in a packet from one\n * timebase to another. Timestamps with unknown values (AV_NOPTS_VALUE) will be\n * ignored.\n *\n * @param pkt packet on which the conversion will be performed\n * @param tb_src source timebase, in which the timing fields in pkt are\n *               expressed\n * @param tb_dst destination timebase, to which the timing fields will be\n *               converted\n */\nvoid av_packet_rescale_ts(AVPacket *pkt, AVRational tb_src, AVRational tb_dst);\n\n/**\n * @}\n */\n\n/**\n * @addtogroup lavc_decoding\n * @{\n */\n\n/**\n * Find a registered decoder with a matching codec ID.\n *\n * @param id AVCodecID of the requested decoder\n * @return A decoder if one was found, NULL otherwise.\n */\nAVCodec *avcodec_find_decoder(enum AVCodecID id);\n\n/**\n * Find a registered decoder with the specified name.\n *\n * @param name name of the requested decoder\n * @return A decoder if one was found, NULL otherwise.\n */\nAVCodec *avcodec_find_decoder_by_name(const char *name);\n\n/**\n * The default callback for AVCodecContext.get_buffer2(). 
It is made public so\n * it can be called by custom get_buffer2() implementations for decoders without\n * AV_CODEC_CAP_DR1 set.\n */\nint avcodec_default_get_buffer2(AVCodecContext *s, AVFrame *frame, int flags);\n\n/**\n * Modify width and height values so that they will result in a memory\n * buffer that is acceptable for the codec if you do not use any horizontal\n * padding.\n *\n * May only be used if a codec with AV_CODEC_CAP_DR1 has been opened.\n */\nvoid avcodec_align_dimensions(AVCodecContext *s, int *width, int *height);\n\n/**\n * Modify width and height values so that they will result in a memory\n * buffer that is acceptable for the codec if you also ensure that all\n * line sizes are a multiple of the respective linesize_align[i].\n *\n * May only be used if a codec with AV_CODEC_CAP_DR1 has been opened.\n */\nvoid avcodec_align_dimensions2(AVCodecContext *s, int *width, int *height,\n                               int linesize_align[AV_NUM_DATA_POINTERS]);\n\n/**\n * Convert AVChromaLocation to swscale x/y chroma position.\n *\n * The positions represent the chroma (0,0) position in a coordinate system\n * with luma (0,0) representing the origin and luma (1,1) representing (256,256).\n *\n * @param xpos  horizontal chroma sample position\n * @param ypos  vertical   chroma sample position\n */\nint avcodec_enum_to_chroma_pos(int *xpos, int *ypos, enum AVChromaLocation pos);\n\n/**\n * Convert swscale x/y chroma position to AVChromaLocation.\n *\n * The positions represent the chroma (0,0) position in a coordinate system\n * with luma (0,0) representing the origin and luma (1,1) representing (256,256).\n *\n * @param xpos  horizontal chroma sample position\n * @param ypos  vertical   chroma sample position\n */\nenum AVChromaLocation avcodec_chroma_pos_to_enum(int xpos, int ypos);\n\n/**\n * Decode the audio frame of size avpkt->size from avpkt->data into frame.\n *\n * Some decoders may support multiple frames in a single AVPacket. 
Such\n * decoders would then just decode the first frame and the return value would be\n * less than the packet size. In this case, avcodec_decode_audio4 has to be\n * called again with an AVPacket containing the remaining data in order to\n * decode the second frame, etc...  Even if no frames are returned, the packet\n * needs to be fed to the decoder with remaining data until it is completely\n * consumed or an error occurs.\n *\n * Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input\n * and output. This means that for some packets they will not immediately\n * produce decoded output and need to be flushed at the end of decoding to get\n * all the decoded data. Flushing is done by calling this function with packets\n * with avpkt->data set to NULL and avpkt->size set to 0 until it stops\n * returning samples. It is safe to flush even those decoders that are not\n * marked with AV_CODEC_CAP_DELAY, then no samples will be returned.\n *\n * @warning The input buffer, avpkt->data must be AV_INPUT_BUFFER_PADDING_SIZE\n *          larger than the actual read bytes because some optimized bitstream\n *          readers read 32 or 64 bits at once and could read over the end.\n *\n * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()\n * before packets may be fed to the decoder.\n *\n * @param      avctx the codec context\n * @param[out] frame The AVFrame in which to store decoded audio samples.\n *                   The decoder will allocate a buffer for the decoded frame by\n *                   calling the AVCodecContext.get_buffer2() callback.\n *                   When AVCodecContext.refcounted_frames is set to 1, the frame is\n *                   reference counted and the returned reference belongs to the\n *                   caller. The caller must release the frame using av_frame_unref()\n *                   when the frame is no longer needed. 
The caller may safely write\n *                   to the frame if av_frame_is_writable() returns 1.\n *                   When AVCodecContext.refcounted_frames is set to 0, the returned\n *                   reference belongs to the decoder and is valid only until the\n *                   next call to this function or until closing or flushing the\n *                   decoder. The caller may not write to it.\n * @param[out] got_frame_ptr Zero if no frame could be decoded, otherwise it is\n *                           non-zero. Note that this field being set to zero\n *                           does not mean that an error has occurred. For\n *                           decoders with AV_CODEC_CAP_DELAY set, no given decode\n *                           call is guaranteed to produce a frame.\n * @param[in]  avpkt The input AVPacket containing the input buffer.\n *                   At least avpkt->data and avpkt->size should be set. Some\n *                   decoders might also require additional fields to be set.\n * @return A negative error code is returned if an error occurred during\n *         decoding, otherwise the number of bytes consumed from the input\n *         AVPacket is returned.\n *\n* @deprecated Use avcodec_send_packet() and avcodec_receive_frame().\n */\nattribute_deprecated\nint avcodec_decode_audio4(AVCodecContext *avctx, AVFrame *frame,\n                          int *got_frame_ptr, const AVPacket *avpkt);\n\n/**\n * Decode the video frame of size avpkt->size from avpkt->data into picture.\n * Some decoders may support multiple frames in a single AVPacket, such\n * decoders would then just decode the first frame.\n *\n * @warning The input buffer must be AV_INPUT_BUFFER_PADDING_SIZE larger than\n * the actual read bytes because some optimized bitstream readers read 32 or 64\n * bits at once and could read over the end.\n *\n * @warning The end of the input buffer buf should be set to 0 to ensure that\n * no overreading happens for damaged 
MPEG streams.\n *\n * @note Codecs which have the AV_CODEC_CAP_DELAY capability set have a delay\n * between input and output; these need to be fed with avpkt->data=NULL,\n * avpkt->size=0 at the end to return the remaining frames.\n *\n * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()\n * before packets may be fed to the decoder.\n *\n * @param avctx the codec context\n * @param[out] picture The AVFrame in which the decoded video frame will be stored.\n *             Use av_frame_alloc() to get an AVFrame. The codec will\n *             allocate memory for the actual bitmap by calling the\n *             AVCodecContext.get_buffer2() callback.\n *             When AVCodecContext.refcounted_frames is set to 1, the frame is\n *             reference counted and the returned reference belongs to the\n *             caller. The caller must release the frame using av_frame_unref()\n *             when the frame is no longer needed. The caller may safely write\n *             to the frame if av_frame_is_writable() returns 1.\n *             When AVCodecContext.refcounted_frames is set to 0, the returned\n *             reference belongs to the decoder and is valid only until the\n *             next call to this function or until closing or flushing the\n *             decoder. The caller may not write to it.\n *\n * @param[in] avpkt The input AVPacket containing the input buffer.\n *            You can create such a packet with av_init_packet() and then setting\n *            data and size; some decoders might in addition need other fields like\n *            flags&AV_PKT_FLAG_KEY. 
All decoders are designed to use the fewest\n *            fields possible.\n * @param[in,out] got_picture_ptr Zero if no frame could be decompressed, otherwise, it is nonzero.\n * @return On error a negative value is returned, otherwise the number of bytes\n * used or zero if no frame could be decompressed.\n *\n * @deprecated Use avcodec_send_packet() and avcodec_receive_frame().\n */\nattribute_deprecated\nint avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture,\n                         int *got_picture_ptr,\n                         const AVPacket *avpkt);\n\n/**\n * Decode a subtitle message.\n * Return a negative value on error, otherwise return the number of bytes used.\n * If no subtitle could be decompressed, got_sub_ptr is zero.\n * Otherwise, the subtitle is stored in *sub.\n * Note that AV_CODEC_CAP_DR1 is not available for subtitle codecs. This is for\n * simplicity, because the performance difference is expected to be negligible\n * and reusing a get_buffer written for video codecs would probably perform badly\n * due to a potentially very different allocation pattern.\n *\n * Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input\n * and output. This means that for some packets they will not immediately\n * produce decoded output and need to be flushed at the end of decoding to get\n * all the decoded data. Flushing is done by calling this function with packets\n * with avpkt->data set to NULL and avpkt->size set to 0 until it stops\n * returning subtitles. 
It is safe to flush even those decoders that are not\n * marked with AV_CODEC_CAP_DELAY; in that case no subtitles will be returned.\n *\n * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()\n * before packets may be fed to the decoder.\n *\n * @param avctx the codec context\n * @param[out] sub The preallocated AVSubtitle in which the decoded subtitle will be stored;\n *                 it must be freed with avsubtitle_free() if *got_sub_ptr is set.\n * @param[in,out] got_sub_ptr Zero if no subtitle could be decompressed, otherwise, it is nonzero.\n * @param[in] avpkt The input AVPacket containing the input buffer.\n */\nint avcodec_decode_subtitle2(AVCodecContext *avctx, AVSubtitle *sub,\n                            int *got_sub_ptr,\n                            AVPacket *avpkt);\n\n/**\n * Supply raw packet data as input to a decoder.\n *\n * Internally, this call will copy relevant AVCodecContext fields, which can\n * influence decoding per-packet, and apply them when the packet is actually\n * decoded. (For example AVCodecContext.skip_frame, which might direct the\n * decoder to drop the frame contained by the packet sent with this function.)\n *\n * @warning The input buffer, avpkt->data must be AV_INPUT_BUFFER_PADDING_SIZE\n *          larger than the actual read bytes because some optimized bitstream\n *          readers read 32 or 64 bits at once and could read over the end.\n *\n * @warning Do not mix this API with the legacy API (like avcodec_decode_video2())\n *          on the same AVCodecContext. It will return unexpected results now\n *          or in future libavcodec versions.\n *\n * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()\n *       before packets may be fed to the decoder.\n *\n * @param avctx codec context\n * @param[in] avpkt The input AVPacket. 
Usually, this will be a single video\n *                  frame, or several complete audio frames.\n *                  Ownership of the packet remains with the caller, and the\n *                  decoder will not write to the packet. The decoder may create\n *                  a reference to the packet data (or copy it if the packet is\n *                  not reference-counted).\n *                  Unlike with older APIs, the packet is always fully consumed,\n *                  and if it contains multiple frames (e.g. some audio codecs),\n *                  will require you to call avcodec_receive_frame() multiple\n *                  times afterwards before you can send a new packet.\n *                  It can be NULL (or an AVPacket with data set to NULL and\n *                  size set to 0); in this case, it is considered a flush\n *                  packet, which signals the end of the stream. Sending the\n *                  first flush packet will return success. Subsequent ones are\n *                  unnecessary and will return AVERROR_EOF. 
If the decoder\n *                  still has frames buffered, it will return them after sending\n *                  a flush packet.\n *\n * @return 0 on success, otherwise negative error code:\n *      AVERROR(EAGAIN):   input is not accepted in the current state - user\n *                         must read output with avcodec_receive_frame() (once\n *                         all output is read, the packet should be resent, and\n *                         the call will not fail with EAGAIN).\n *      AVERROR_EOF:       the decoder has been flushed, and no new packets can\n *                         be sent to it (also returned if more than 1 flush\n *                         packet is sent)\n *      AVERROR(EINVAL):   codec not opened, it is an encoder, or requires flush\n *      AVERROR(ENOMEM):   failed to add packet to internal queue, or similar\n *      other errors: legitimate decoding errors\n */\nint avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);\n\n/**\n * Return decoded output data from a decoder.\n *\n * @param avctx codec context\n * @param frame This will be set to a reference-counted video or audio\n *              frame (depending on the decoder type) allocated by the\n *              decoder. Note that the function will always call\n *              av_frame_unref(frame) before doing anything else.\n *\n * @return\n *      0:                 success, a frame was returned\n *      AVERROR(EAGAIN):   output is not available in this state - user must try\n *                         to send new input\n *      AVERROR_EOF:       the decoder has been fully flushed, and there will be\n *                         no more output frames\n *      AVERROR(EINVAL):   codec not opened, or it is an encoder\n *      other negative values: legitimate decoding errors\n */\nint avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);\n\n/**\n * Supply a raw video or audio frame to the encoder. 
Use avcodec_receive_packet()\n * to retrieve buffered output packets.\n *\n * @param avctx     codec context\n * @param[in] frame AVFrame containing the raw audio or video frame to be encoded.\n *                  Ownership of the frame remains with the caller, and the\n *                  encoder will not write to the frame. The encoder may create\n *                  a reference to the frame data (or copy it if the frame is\n *                  not reference-counted).\n *                  It can be NULL, in which case it is considered a flush\n *                  packet.  This signals the end of the stream. If the encoder\n *                  still has packets buffered, it will return them after this\n *                  call. Once flushing mode has been entered, additional flush\n *                  packets are ignored, and sending frames will return\n *                  AVERROR_EOF.\n *\n *                  For audio:\n *                  If AV_CODEC_CAP_VARIABLE_FRAME_SIZE is set, then each frame\n *                  can have any number of samples.\n *                  If it is not set, frame->nb_samples must be equal to\n *                  avctx->frame_size for all frames except the last.\n *                  The final frame may be smaller than avctx->frame_size.\n * @return 0 on success, otherwise negative error code:\n *      AVERROR(EAGAIN):   input is not accepted in the current state - user\n *                         must read output with avcodec_receive_packet() (once\n *                         all output is read, the packet should be resent, and\n *                         the call will not fail with EAGAIN).\n *      AVERROR_EOF:       the encoder has been flushed, and no new frames can\n *                         be sent to it\n *      AVERROR(EINVAL):   codec not opened, refcounted_frames not set, it is a\n *                         decoder, or requires flush\n *      AVERROR(ENOMEM):   failed to add packet to internal queue, or similar\n *      
other errors: legitimate encoding errors\n */\nint avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);\n\n/**\n * Read encoded data from the encoder.\n *\n * @param avctx codec context\n * @param avpkt This will be set to a reference-counted packet allocated by the\n *              encoder. Note that the function will always call\n *              av_packet_unref(avpkt) before doing anything else.\n * @return 0 on success, otherwise negative error code:\n *      AVERROR(EAGAIN):   output is not available in the current state - user\n *                         must try to send input\n *      AVERROR_EOF:       the encoder has been fully flushed, and there will be\n *                         no more output packets\n *      AVERROR(EINVAL):   codec not opened, or it is a decoder\n *      other errors: legitimate encoding errors\n */\nint avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);\n\n/**\n * Create and return an AVHWFramesContext with values adequate for hardware\n * decoding. It is meant to be called from the get_format callback, and is\n * a helper for preparing an AVHWFramesContext for AVCodecContext.hw_frames_ctx.\n * This API is for decoding with certain hardware acceleration modes/APIs only.\n *\n * The returned AVHWFramesContext is not initialized. The caller must do this\n * with av_hwframe_ctx_init().\n *\n * Calling this function is not a requirement, but makes it simpler to avoid\n * codec or hardware API specific details when manually allocating frames.\n *\n * Alternatively, an API user can set AVCodecContext.hw_device_ctx,\n * which sets up AVCodecContext.hw_frames_ctx fully automatically, and makes\n * it unnecessary to call this function or to care about\n * AVHWFramesContext initialization at all.\n *\n * There are a number of requirements for calling this function:\n *\n * - It must be called from get_format with the same avctx parameter that was\n *   passed to get_format. 
Calling it outside of get_format is not allowed, and\n *   can trigger undefined behavior.\n * - The function is not always supported (see description of return values).\n *   Even if this function returns successfully, hwaccel initialization could\n *   fail later. (The degree to which implementations check whether the stream\n *   is actually supported varies. Some do this check only after the user's\n *   get_format callback returns.)\n * - The hw_pix_fmt must be one of the choices suggested by get_format. If the\n *   user decides to use an AVHWFramesContext prepared with this API function,\n *   the user must return the same hw_pix_fmt from get_format.\n * - The device_ref passed to this function must support the given hw_pix_fmt.\n * - After calling this API function, it is the user's responsibility to\n *   initialize the AVHWFramesContext (returned by the out_frames_ref parameter),\n *   and to set AVCodecContext.hw_frames_ctx to it. If done, this must be done\n *   before returning from get_format (this is implied by the normal\n *   AVCodecContext.hw_frames_ctx API rules).\n * - The AVHWFramesContext parameters may change every time get_format is\n *   called. Also, AVCodecContext.hw_frames_ctx is reset before get_format. So\n *   you are inherently required to go through this process again on every\n *   get_format call.\n * - It is perfectly possible to call this function without actually using\n *   the resulting AVHWFramesContext. 
One use-case might be trying to reuse a\n *   previously initialized AVHWFramesContext, and calling this API function\n *   only to test whether the required frame parameters have changed.\n * - Fields that use dynamically allocated values of any kind must not be set\n *   by the user unless setting them is explicitly allowed by the documentation.\n *   If the user sets AVHWFramesContext.free and AVHWFramesContext.user_opaque,\n *   the new free callback must call the potentially set previous free callback.\n *   This API call may set any dynamically allocated fields, including the free\n *   callback.\n *\n * The function will set at least the following fields on AVHWFramesContext\n * (potentially more, depending on hwaccel API):\n *\n * - All fields set by av_hwframe_ctx_alloc().\n * - Set the format field to hw_pix_fmt.\n * - Set the sw_format field to the most suited and most versatile format. (An\n *   implication is that this will prefer generic formats over opaque formats\n *   with arbitrary restrictions, if possible.)\n * - Set the width/height fields to the coded frame size, rounded up to the\n *   API-specific minimum alignment.\n * - Only _if_ the hwaccel requires a pre-allocated pool: set the initial_pool_size\n *   field to the number of maximum reference surfaces possible with the codec,\n *   plus 1 surface for the user to work (meaning the user can safely reference\n *   at most 1 decoded surface at a time), plus additional buffering introduced\n *   by frame threading. 
If the hwaccel does not require pre-allocation, the\n *   field is left at 0, and the decoder will allocate new surfaces on demand\n *   during decoding.\n * - Possibly AVHWFramesContext.hwctx fields, depending on the underlying\n *   hardware API.\n *\n * Essentially, out_frames_ref returns the same as av_hwframe_ctx_alloc(), but\n * with basic frame parameters set.\n *\n * The function is stateless, and does not change the AVCodecContext or the\n * device_ref AVHWDeviceContext.\n *\n * @param avctx The context which is currently calling get_format, and which\n *              implicitly contains all state needed for filling the returned\n *              AVHWFramesContext properly.\n * @param device_ref A reference to the AVHWDeviceContext describing the device\n *                   which will be used by the hardware decoder.\n * @param hw_pix_fmt The hwaccel format you are going to return from get_format.\n * @param out_frames_ref On success, set to a reference to an _uninitialized_\n *                       AVHWFramesContext, created from the given device_ref.\n *                       Fields will be set to values required for decoding.\n *                       Not changed if an error is returned.\n * @return zero on success, a negative value on error. The following error codes\n *         have special semantics:\n *      AVERROR(ENOENT): the decoder does not support this functionality. 
Setup\n *                       is always manual, or it is a decoder which does not\n *                       support setting AVCodecContext.hw_frames_ctx at all,\n *                       or it is a software format.\n *      AVERROR(EINVAL): it is known that hardware decoding is not supported for\n *                       this configuration, or the device_ref is not supported\n *                       for the hwaccel referenced by hw_pix_fmt.\n */\nint avcodec_get_hw_frames_parameters(AVCodecContext *avctx,\n                                     AVBufferRef *device_ref,\n                                     enum AVPixelFormat hw_pix_fmt,\n                                     AVBufferRef **out_frames_ref);\n\n\n\n/**\n * @defgroup lavc_parsing Frame parsing\n * @{\n */\n\nenum AVPictureStructure {\n    AV_PICTURE_STRUCTURE_UNKNOWN,      ///< unknown\n    AV_PICTURE_STRUCTURE_TOP_FIELD,    ///< coded as top field\n    AV_PICTURE_STRUCTURE_BOTTOM_FIELD, ///< coded as bottom field\n    AV_PICTURE_STRUCTURE_FRAME,        ///< coded as frame\n};\n\ntypedef struct AVCodecParserContext {\n    void *priv_data;\n    struct AVCodecParser *parser;\n    int64_t frame_offset; /* offset of the current frame */\n    int64_t cur_offset; /* current offset\n                           (incremented by each av_parser_parse()) */\n    int64_t next_frame_offset; /* offset of the next frame */\n    /* video info */\n    int pict_type; /* XXX: Put it back in AVCodecContext. */\n    /**\n     * This field is used for proper frame duration computation in lavf.\n     * It signals how much longer the frame duration of the current frame\n     * is compared to the normal frame duration.\n     *\n     * frame_duration = (1 + repeat_pict) * time_base\n     *\n     * It is used by codecs like H.264 to display telecined material.\n     */\n    int repeat_pict; /* XXX: Put it back in AVCodecContext. 
*/\n    int64_t pts;     /* pts of the current frame */\n    int64_t dts;     /* dts of the current frame */\n\n    /* private data */\n    int64_t last_pts;\n    int64_t last_dts;\n    int fetch_timestamp;\n\n#define AV_PARSER_PTS_NB 4\n    int cur_frame_start_index;\n    int64_t cur_frame_offset[AV_PARSER_PTS_NB];\n    int64_t cur_frame_pts[AV_PARSER_PTS_NB];\n    int64_t cur_frame_dts[AV_PARSER_PTS_NB];\n\n    int flags;\n#define PARSER_FLAG_COMPLETE_FRAMES           0x0001\n#define PARSER_FLAG_ONCE                      0x0002\n/// Set if the parser has a valid file offset\n#define PARSER_FLAG_FETCHED_OFFSET            0x0004\n#define PARSER_FLAG_USE_CODEC_TS              0x1000\n\n    int64_t offset;      ///< byte offset from the start of the starting packet\n    int64_t cur_frame_end[AV_PARSER_PTS_NB];\n\n    /**\n     * Set by the parser to 1 for key frames and 0 for non-key frames.\n     * It is initialized to -1, so if the parser doesn't set this flag,\n     * the old-style fallback using the AV_PICTURE_TYPE_I picture type as key frames\n     * will be used.\n     */\n    int key_frame;\n\n#if FF_API_CONVERGENCE_DURATION\n    /**\n     * @deprecated unused\n     */\n    attribute_deprecated\n    int64_t convergence_duration;\n#endif\n\n    // Timestamp generation support:\n    /**\n     * Synchronization point for the start of timestamp generation.\n     *\n     * Set to >0 for a sync point, 0 for no sync point and <0 for undefined\n     * (default).\n     *\n     * For example, this corresponds to the presence of an H.264 buffering period\n     * SEI message.\n     */\n    int dts_sync_point;\n\n    /**\n     * Offset of the current timestamp against the last timestamp sync point in\n     * units of AVCodecContext.time_base.\n     *\n     * Set to INT_MIN when dts_sync_point is unused. Otherwise, it must\n     * contain a valid timestamp offset.\n     *\n     * Note that the timestamp of a sync point usually has a nonzero\n     * dts_ref_dts_delta, which refers to the previous sync point. 
The offset of\n     * the next frame after the timestamp sync point will usually be 1.\n     *\n     * For example, this corresponds to H.264 cpb_removal_delay.\n     */\n    int dts_ref_dts_delta;\n\n    /**\n     * Presentation delay of the current frame in units of AVCodecContext.time_base.\n     *\n     * Set to INT_MIN when dts_sync_point is unused. Otherwise, it must\n     * contain a valid non-negative timestamp delta (the presentation time of a frame\n     * must not lie in the past).\n     *\n     * This delay represents the difference between decoding and presentation\n     * time of the frame.\n     *\n     * For example, this corresponds to H.264 dpb_output_delay.\n     */\n    int pts_dts_delta;\n\n    /**\n     * Position of the packet in the file.\n     *\n     * Analogous to cur_frame_pts/dts\n     */\n    int64_t cur_frame_pos[AV_PARSER_PTS_NB];\n\n    /**\n     * Byte position of the currently parsed frame in the stream.\n     */\n    int64_t pos;\n\n    /**\n     * Previous frame byte position.\n     */\n    int64_t last_pos;\n\n    /**\n     * Duration of the current frame.\n     * For audio, this is in units of 1 / AVCodecContext.sample_rate.\n     * For all other types, this is in units of AVCodecContext.time_base.\n     */\n    int duration;\n\n    enum AVFieldOrder field_order;\n\n    /**\n     * Indicates whether a picture is coded as a frame, top field or bottom field.\n     *\n     * For example, H.264 field_pic_flag equal to 0 corresponds to\n     * AV_PICTURE_STRUCTURE_FRAME. 
An H.264 picture with field_pic_flag\n     * equal to 1 and bottom_field_flag equal to 0 corresponds to\n     * AV_PICTURE_STRUCTURE_TOP_FIELD.\n     */\n    enum AVPictureStructure picture_structure;\n\n    /**\n     * Picture number incremented in presentation or output order.\n     * This field may be reinitialized at the first picture of a new sequence.\n     *\n     * For example, this corresponds to H.264 PicOrderCnt.\n     */\n    int output_picture_number;\n\n    /**\n     * Dimensions of the decoded video intended for presentation.\n     */\n    int width;\n    int height;\n\n    /**\n     * Dimensions of the coded video.\n     */\n    int coded_width;\n    int coded_height;\n\n    /**\n     * The format of the coded data, corresponding to enum AVPixelFormat for video\n     * and enum AVSampleFormat for audio.\n     *\n     * Note that a decoder can have considerable freedom in how exactly it\n     * decodes the data, so the format reported here might be different from the\n     * one returned by a decoder.\n     */\n    int format;\n} AVCodecParserContext;\n\ntypedef struct AVCodecParser {\n    int codec_ids[5]; /* several codec IDs are permitted */\n    int priv_data_size;\n    int (*parser_init)(AVCodecParserContext *s);\n    /* This callback never returns an error; a negative value means that\n     * the frame start was in a previous packet. */\n    int (*parser_parse)(AVCodecParserContext *s,\n                        AVCodecContext *avctx,\n                        const uint8_t **poutbuf, int *poutbuf_size,\n                        const uint8_t *buf, int buf_size);\n    void (*parser_close)(AVCodecParserContext *s);\n    int (*split)(AVCodecContext *avctx, const uint8_t *buf, int buf_size);\n    struct AVCodecParser *next;\n} AVCodecParser;\n\n/**\n * Iterate over all registered codec parsers.\n *\n * @param opaque a pointer where libavcodec will store the iteration state. 
Must\n *               point to NULL to start the iteration.\n *\n * @return the next registered codec parser or NULL when the iteration is\n *         finished\n */\nconst AVCodecParser *av_parser_iterate(void **opaque);\n\nattribute_deprecated\nAVCodecParser *av_parser_next(const AVCodecParser *c);\n\nattribute_deprecated\nvoid av_register_codec_parser(AVCodecParser *parser);\nAVCodecParserContext *av_parser_init(int codec_id);\n\n/**\n * Parse a packet.\n *\n * @param s             parser context.\n * @param avctx         codec context.\n * @param poutbuf       set to pointer to parsed buffer or NULL if not yet finished.\n * @param poutbuf_size  set to size of parsed buffer or zero if not yet finished.\n * @param buf           input buffer.\n * @param buf_size      buffer size in bytes without the padding, i.e. the full buffer\n *                      size is assumed to be buf_size + AV_INPUT_BUFFER_PADDING_SIZE.\n *                      To signal EOF, this should be 0 (so that the last frame\n *                      can be output).\n * @param pts           input presentation timestamp.\n * @param dts           input decoding timestamp.\n * @param pos           input byte position in stream.\n * @return the number of bytes of the input bitstream used.\n *\n * Example:\n * @code\n *   while (in_len) {\n *       len = av_parser_parse2(myparser, avctx, &data, &size,\n *                              in_data, in_len,\n *                              pts, dts, pos);\n *       in_data += len;\n *       in_len  -= len;\n *\n *       if (size)\n *           decode_frame(data, size);\n *   }\n * @endcode\n */\nint av_parser_parse2(AVCodecParserContext *s,\n                     AVCodecContext *avctx,\n                     uint8_t **poutbuf, int *poutbuf_size,\n                     const uint8_t *buf, int buf_size,\n                     int64_t pts, int64_t dts,\n                     int64_t pos);\n\n/**\n * @return 0 if the output buffer is a 
subset of the input, 1 if it is allocated and must be freed\n * @deprecated use AVBitStreamFilter\n */\nint av_parser_change(AVCodecParserContext *s,\n                     AVCodecContext *avctx,\n                     uint8_t **poutbuf, int *poutbuf_size,\n                     const uint8_t *buf, int buf_size, int keyframe);\nvoid av_parser_close(AVCodecParserContext *s);\n\n/**\n * @}\n * @}\n */\n\n/**\n * @addtogroup lavc_encoding\n * @{\n */\n\n/**\n * Find a registered encoder with a matching codec ID.\n *\n * @param id AVCodecID of the requested encoder\n * @return An encoder if one was found, NULL otherwise.\n */\nAVCodec *avcodec_find_encoder(enum AVCodecID id);\n\n/**\n * Find a registered encoder with the specified name.\n *\n * @param name name of the requested encoder\n * @return An encoder if one was found, NULL otherwise.\n */\nAVCodec *avcodec_find_encoder_by_name(const char *name);\n\n/**\n * Encode a frame of audio.\n *\n * Takes input samples from frame and writes the next output packet, if\n * available, to avpkt. The output packet does not necessarily contain data for\n * the most recent frame, as encoders can delay, split, and combine input frames\n * internally as needed.\n *\n * @param avctx     codec context\n * @param avpkt     output AVPacket.\n *                  The user can supply an output buffer by setting\n *                  avpkt->data and avpkt->size prior to calling the\n *                  function, but if the size of the user-provided data is not\n *                  large enough, encoding will fail. If avpkt->data and\n *                  avpkt->size are set, avpkt->destruct must also be set. All\n *                  other AVPacket fields will be reset by the encoder using\n *                  av_init_packet(). If avpkt->data is NULL, the encoder will\n *                  allocate it. 
The encoder will set avpkt->size to the size\n *                  of the output packet.\n *\n *                  If this function fails or produces no output, avpkt will be\n *                  freed using av_packet_unref().\n * @param[in] frame AVFrame containing the raw audio data to be encoded.\n *                  May be NULL when flushing an encoder that has the\n *                  AV_CODEC_CAP_DELAY capability set.\n *                  If AV_CODEC_CAP_VARIABLE_FRAME_SIZE is set, then each frame\n *                  can have any number of samples.\n *                  If it is not set, frame->nb_samples must be equal to\n *                  avctx->frame_size for all frames except the last.\n *                  The final frame may be smaller than avctx->frame_size.\n * @param[out] got_packet_ptr This field is set to 1 by libavcodec if the\n *                            output packet is non-empty, and to 0 if it is\n *                            empty. If the function returns an error, the\n *                            packet can be assumed to be invalid, and the\n *                            value of got_packet_ptr is undefined and should\n *                            not be used.\n * @return          0 on success, negative error code on failure\n *\n * @deprecated use avcodec_send_frame()/avcodec_receive_packet() instead\n */\nattribute_deprecated\nint avcodec_encode_audio2(AVCodecContext *avctx, AVPacket *avpkt,\n                          const AVFrame *frame, int *got_packet_ptr);\n\n/**\n * Encode a frame of video.\n *\n * Takes input raw video data from frame and writes the next output packet, if\n * available, to avpkt. 
The output packet does not necessarily contain data for\n * the most recent frame, as encoders can delay and reorder input frames\n * internally as needed.\n *\n * @param avctx     codec context\n * @param avpkt     output AVPacket.\n *                  The user can supply an output buffer by setting\n *                  avpkt->data and avpkt->size prior to calling the\n *                  function, but if the size of the user-provided data is not\n *                  large enough, encoding will fail. All other AVPacket fields\n *                  will be reset by the encoder using av_init_packet(). If\n *                  avpkt->data is NULL, the encoder will allocate it.\n *                  The encoder will set avpkt->size to the size of the\n *                  output packet. The returned data (if any) belongs to the\n *                  caller, who is responsible for freeing it.\n *\n *                  If this function fails or produces no output, avpkt will be\n *                  freed using av_packet_unref().\n * @param[in] frame AVFrame containing the raw video data to be encoded.\n *                  May be NULL when flushing an encoder that has the\n *                  AV_CODEC_CAP_DELAY capability set.\n * @param[out] got_packet_ptr This field is set to 1 by libavcodec if the\n *                            output packet is non-empty, and to 0 if it is\n *                            empty. 
If the function returns an error, the\n *                            packet can be assumed to be invalid, and the\n *                            value of got_packet_ptr is undefined and should\n *                            not be used.\n * @return          0 on success, negative error code on failure\n *\n * @deprecated use avcodec_send_frame()/avcodec_receive_packet() instead\n */\nattribute_deprecated\nint avcodec_encode_video2(AVCodecContext *avctx, AVPacket *avpkt,\n                          const AVFrame *frame, int *got_packet_ptr);\n\nint avcodec_encode_subtitle(AVCodecContext *avctx, uint8_t *buf, int buf_size,\n                            const AVSubtitle *sub);\n\n\n/**\n * @}\n */\n\n#if FF_API_AVPICTURE\n/**\n * @addtogroup lavc_picture\n * @{\n */\n\n/**\n * @deprecated unused\n */\nattribute_deprecated\nint avpicture_alloc(AVPicture *picture, enum AVPixelFormat pix_fmt, int width, int height);\n\n/**\n * @deprecated unused\n */\nattribute_deprecated\nvoid avpicture_free(AVPicture *picture);\n\n/**\n * @deprecated use av_image_fill_arrays() instead.\n */\nattribute_deprecated\nint avpicture_fill(AVPicture *picture, const uint8_t *ptr,\n                   enum AVPixelFormat pix_fmt, int width, int height);\n\n/**\n * @deprecated use av_image_copy_to_buffer() instead.\n */\nattribute_deprecated\nint avpicture_layout(const AVPicture *src, enum AVPixelFormat pix_fmt,\n                     int width, int height,\n                     unsigned char *dest, int dest_size);\n\n/**\n * @deprecated use av_image_get_buffer_size() instead.\n */\nattribute_deprecated\nint avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);\n\n/**\n * @deprecated use av_image_copy() instead.\n */\nattribute_deprecated\nvoid av_picture_copy(AVPicture *dst, const AVPicture *src,\n                     enum AVPixelFormat pix_fmt, int width, int height);\n\n/**\n * @deprecated unused\n */\nattribute_deprecated\nint av_picture_crop(AVPicture *dst, const AVPicture *src,\n    
                enum AVPixelFormat pix_fmt, int top_band, int left_band);\n\n/**\n * @deprecated unused\n */\nattribute_deprecated\nint av_picture_pad(AVPicture *dst, const AVPicture *src, int height, int width, enum AVPixelFormat pix_fmt,\n            int padtop, int padbottom, int padleft, int padright, int *color);\n\n/**\n * @}\n */\n#endif\n\n/**\n * @defgroup lavc_misc Utility functions\n * @ingroup libavc\n *\n * Miscellaneous utility functions related to both encoding and decoding\n * (or neither).\n * @{\n */\n\n/**\n * @defgroup lavc_misc_pixfmt Pixel formats\n *\n * Functions for working with pixel formats.\n * @{\n */\n\n#if FF_API_GETCHROMA\n/**\n * @deprecated Use av_pix_fmt_get_chroma_sub_sample()\n */\n\nattribute_deprecated\nvoid avcodec_get_chroma_sub_sample(enum AVPixelFormat pix_fmt, int *h_shift, int *v_shift);\n#endif\n\n/**\n * Return a value representing the fourCC code associated with the\n * pixel format pix_fmt, or 0 if no associated fourCC code can be\n * found.\n */\nunsigned int avcodec_pix_fmt_to_codec_tag(enum AVPixelFormat pix_fmt);\n\n/**\n * @deprecated see av_get_pix_fmt_loss()\n */\nint avcodec_get_pix_fmt_loss(enum AVPixelFormat dst_pix_fmt, enum AVPixelFormat src_pix_fmt,\n                             int has_alpha);\n\n/**\n * Find the best pixel format to convert to given a certain source pixel\n * format.  When converting from one pixel format to another, information loss\n * may occur.  For example, when converting from RGB24 to GRAY, the color\n * information will be lost. Similarly, other losses occur when converting from\n * some formats to other formats. 
avcodec_find_best_pix_fmt_of_list() searches which of\n * the given pixel formats should be used to suffer the least amount of loss.\n * The pixel formats from which it chooses one are determined by the\n * pix_fmt_list parameter.\n *\n *\n * @param[in] pix_fmt_list AV_PIX_FMT_NONE terminated array of pixel formats to choose from\n * @param[in] src_pix_fmt source pixel format\n * @param[in] has_alpha Whether the source pixel format alpha channel is used.\n * @param[out] loss_ptr Combination of flags informing you what kind of losses will occur.\n * @return The best pixel format to convert to or -1 if none was found.\n */\nenum AVPixelFormat avcodec_find_best_pix_fmt_of_list(const enum AVPixelFormat *pix_fmt_list,\n                                            enum AVPixelFormat src_pix_fmt,\n                                            int has_alpha, int *loss_ptr);\n\n/**\n * @deprecated see av_find_best_pix_fmt_of_2()\n */\nenum AVPixelFormat avcodec_find_best_pix_fmt_of_2(enum AVPixelFormat dst_pix_fmt1, enum AVPixelFormat dst_pix_fmt2,\n                                            enum AVPixelFormat src_pix_fmt, int has_alpha, int *loss_ptr);\n\nattribute_deprecated\nenum AVPixelFormat avcodec_find_best_pix_fmt2(enum AVPixelFormat dst_pix_fmt1, enum AVPixelFormat dst_pix_fmt2,\n                                            enum AVPixelFormat src_pix_fmt, int has_alpha, int *loss_ptr);\n\nenum AVPixelFormat avcodec_default_get_format(struct AVCodecContext *s, const enum AVPixelFormat *fmt);\n\n/**\n * @}\n */\n\n#if FF_API_TAG_STRING\n/**\n * Put a string representing the codec tag codec_tag in buf.\n *\n * @param buf       buffer to place codec tag in\n * @param buf_size  size in bytes of buf\n * @param codec_tag codec tag to assign\n * @return the length of the string that would have been generated if\n * enough space had been available, excluding the trailing null\n *\n * @deprecated see av_fourcc_make_string() and av_fourcc2str().\n */\nattribute_deprecated\nsize_t av_get_codec_tag_string(char *buf, size_t buf_size, unsigned int codec_tag);\n#endif\n\nvoid avcodec_string(char *buf, int buf_size, AVCodecContext *enc, int encode);\n\n/**\n * Return a name for the specified profile, if available.\n *\n * @param codec the codec that is searched for the given profile\n * @param profile the profile value for which a name is requested\n * @return A name for the profile if found, NULL otherwise.\n */\nconst char *av_get_profile_name(const AVCodec *codec, int profile);\n\n/**\n * Return a name for the specified profile, if available.\n *\n * @param codec_id the ID of the codec to which the requested profile belongs\n * @param profile the profile value for which a name is requested\n * @return A name for the profile if found, NULL otherwise.\n *\n * @note unlike av_get_profile_name(), which searches a list of profiles\n *       supported by a specific decoder or encoder implementation, this\n *       function searches the list of profiles from the AVCodecDescriptor.\n */\nconst char *avcodec_profile_name(enum AVCodecID codec_id, int profile);\n\nint avcodec_default_execute(AVCodecContext *c, int (*func)(AVCodecContext *c2, void *arg2), void *arg, int *ret, int count, int size);\nint avcodec_default_execute2(AVCodecContext *c, int (*func)(AVCodecContext *c2, void *arg2, int, int), void *arg, int *ret, int count);\n//FIXME func typedef\n\n/**\n * Fill AVFrame audio data and linesize pointers.\n *\n * The buffer buf must be a preallocated buffer with a size big enough\n * to contain the specified number of samples. The filled AVFrame data\n * pointers will point to this buffer.\n *\n * AVFrame extended_data channel pointers are allocated if necessary for\n * planar audio.\n *\n * @param frame       the AVFrame\n *                    frame->nb_samples must be set prior to calling the\n *                    function. 
This function fills in frame->data,\n *                    frame->extended_data, frame->linesize[0].\n * @param nb_channels channel count\n * @param sample_fmt  sample format\n * @param buf         buffer to use for frame data\n * @param buf_size    size of buffer\n * @param align       plane size sample alignment (0 = default)\n * @return            >=0 on success, negative error code on failure\n * @todo return the size in bytes required to store the samples in\n * case of success, at the next libavutil bump\n */\nint avcodec_fill_audio_frame(AVFrame *frame, int nb_channels,\n                             enum AVSampleFormat sample_fmt, const uint8_t *buf,\n                             int buf_size, int align);\n\n/**\n * Reset the internal decoder state / flush internal buffers. Should be called\n * e.g. when seeking or when switching to a different stream.\n *\n * @note when refcounted frames are not used (i.e. avctx->refcounted_frames is 0),\n * this invalidates the frames previously returned from the decoder. 
When\n * refcounted frames are used, the decoder just releases any references it might\n * keep internally, but the caller's reference remains valid.\n */\nvoid avcodec_flush_buffers(AVCodecContext *avctx);\n\n/**\n * Return codec bits per sample.\n *\n * @param[in] codec_id the codec\n * @return Number of bits per sample or zero if unknown for the given codec.\n */\nint av_get_bits_per_sample(enum AVCodecID codec_id);\n\n/**\n * Return the PCM codec associated with a sample format.\n * @param be  endianness, 0 for little, 1 for big,\n *            -1 (or anything else) for native\n * @return  AV_CODEC_ID_PCM_* or AV_CODEC_ID_NONE\n */\nenum AVCodecID av_get_pcm_codec(enum AVSampleFormat fmt, int be);\n\n/**\n * Return codec bits per sample.\n * Only return non-zero if the bits per sample is exactly correct, not an\n * approximation.\n *\n * @param[in] codec_id the codec\n * @return Number of bits per sample or zero if unknown for the given codec.\n */\nint av_get_exact_bits_per_sample(enum AVCodecID codec_id);\n\n/**\n * Return audio frame duration.\n *\n * @param avctx        codec context\n * @param frame_bytes  size of the frame, or 0 if unknown\n * @return             frame duration, in samples, if known. 
0 if not able to\n *                     determine.\n */\nint av_get_audio_frame_duration(AVCodecContext *avctx, int frame_bytes);\n\n/**\n * This function is the same as av_get_audio_frame_duration(), except it works\n * with AVCodecParameters instead of an AVCodecContext.\n */\nint av_get_audio_frame_duration2(AVCodecParameters *par, int frame_bytes);\n\n#if FF_API_OLD_BSF\ntypedef struct AVBitStreamFilterContext {\n    void *priv_data;\n    const struct AVBitStreamFilter *filter;\n    AVCodecParserContext *parser;\n    struct AVBitStreamFilterContext *next;\n    /**\n     * Internal default arguments, used if NULL is passed to av_bitstream_filter_filter().\n     * Not for access by library users.\n     */\n    char *args;\n} AVBitStreamFilterContext;\n#endif\n\ntypedef struct AVBSFInternal AVBSFInternal;\n\n/**\n * The bitstream filter state.\n *\n * This struct must be allocated with av_bsf_alloc() and freed with\n * av_bsf_free().\n *\n * The fields in the struct will only be changed (by the caller or by the\n * filter) as described in their documentation, and are to be considered\n * immutable otherwise.\n */\ntypedef struct AVBSFContext {\n    /**\n     * A class for logging and AVOptions\n     */\n    const AVClass *av_class;\n\n    /**\n     * The bitstream filter this context is an instance of.\n     */\n    const struct AVBitStreamFilter *filter;\n\n    /**\n     * Opaque libavcodec internal data. Must not be touched by the caller in any\n     * way.\n     */\n    AVBSFInternal *internal;\n\n    /**\n     * Opaque filter-specific private data. If filter->priv_class is non-NULL,\n     * this is an AVOptions-enabled struct.\n     */\n    void *priv_data;\n\n    /**\n     * Parameters of the input stream. This field is allocated in\n     * av_bsf_alloc(); it needs to be filled by the caller before\n     * av_bsf_init().\n     */\n    AVCodecParameters *par_in;\n\n    /**\n     * Parameters of the output stream. 
This field is allocated in\n     * av_bsf_alloc(); it is set by the filter in av_bsf_init().\n     */\n    AVCodecParameters *par_out;\n\n    /**\n     * The timebase used for the timestamps of the input packets. Set by the\n     * caller before av_bsf_init().\n     */\n    AVRational time_base_in;\n\n    /**\n     * The timebase used for the timestamps of the output packets. Set by the\n     * filter in av_bsf_init().\n     */\n    AVRational time_base_out;\n} AVBSFContext;\n\ntypedef struct AVBitStreamFilter {\n    const char *name;\n\n    /**\n     * A list of codec IDs supported by the filter, terminated by\n     * AV_CODEC_ID_NONE.\n     * May be NULL, in which case the bitstream filter works with any codec ID.\n     */\n    const enum AVCodecID *codec_ids;\n\n    /**\n     * A class for the private data, used to declare bitstream filter private\n     * AVOptions. This field is NULL for bitstream filters that do not declare\n     * any options.\n     *\n     * If this field is non-NULL, the first member of the filter private data\n     * must be a pointer to AVClass, which will be set by libavcodec generic\n     * code to this class.\n     */\n    const AVClass *priv_class;\n\n    /*****************************************************************\n     * No fields below this line are part of the public API. They\n     * may not be used outside of libavcodec and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n\n    int priv_data_size;\n    int (*init)(AVBSFContext *ctx);\n    int (*filter)(AVBSFContext *ctx, AVPacket *pkt);\n    void (*close)(AVBSFContext *ctx);\n} AVBitStreamFilter;\n\n#if FF_API_OLD_BSF\n/**\n * @deprecated the old bitstream filtering API (using AVBitStreamFilterContext)\n * is deprecated. 
Use the new bitstream filtering API (using AVBSFContext).\n */\nattribute_deprecated\nvoid av_register_bitstream_filter(AVBitStreamFilter *bsf);\n/**\n * @deprecated the old bitstream filtering API (using AVBitStreamFilterContext)\n * is deprecated. Use av_bsf_get_by_name(), av_bsf_alloc(), and av_bsf_init()\n * from the new bitstream filtering API (using AVBSFContext).\n */\nattribute_deprecated\nAVBitStreamFilterContext *av_bitstream_filter_init(const char *name);\n/**\n * @deprecated the old bitstream filtering API (using AVBitStreamFilterContext)\n * is deprecated. Use av_bsf_send_packet() and av_bsf_receive_packet() from the\n * new bitstream filtering API (using AVBSFContext).\n */\nattribute_deprecated\nint av_bitstream_filter_filter(AVBitStreamFilterContext *bsfc,\n                               AVCodecContext *avctx, const char *args,\n                               uint8_t **poutbuf, int *poutbuf_size,\n                               const uint8_t *buf, int buf_size, int keyframe);\n/**\n * @deprecated the old bitstream filtering API (using AVBitStreamFilterContext)\n * is deprecated. Use av_bsf_free() from the new bitstream filtering API (using\n * AVBSFContext).\n */\nattribute_deprecated\nvoid av_bitstream_filter_close(AVBitStreamFilterContext *bsf);\n/**\n * @deprecated the old bitstream filtering API (using AVBitStreamFilterContext)\n * is deprecated. Use av_bsf_iterate() from the new bitstream filtering API (using\n * AVBSFContext).\n */\nattribute_deprecated\nconst AVBitStreamFilter *av_bitstream_filter_next(const AVBitStreamFilter *f);\n#endif\n\n/**\n * @return a bitstream filter with the specified name or NULL if no such\n *         bitstream filter exists.\n */\nconst AVBitStreamFilter *av_bsf_get_by_name(const char *name);\n\n/**\n * Iterate over all registered bitstream filters.\n *\n * @param opaque a pointer where libavcodec will store the iteration state. 
Must\n *               point to NULL to start the iteration.\n *\n * @return the next registered bitstream filter or NULL when the iteration is\n *         finished\n */\nconst AVBitStreamFilter *av_bsf_iterate(void **opaque);\n#if FF_API_NEXT\nattribute_deprecated\nconst AVBitStreamFilter *av_bsf_next(void **opaque);\n#endif\n\n/**\n * Allocate a context for a given bitstream filter. The caller must fill in the\n * context parameters as described in the documentation and then call\n * av_bsf_init() before sending any data to the filter.\n *\n * @param filter the filter for which to allocate an instance.\n * @param ctx a pointer into which the pointer to the newly-allocated context\n *            will be written. It must be freed with av_bsf_free() after the\n *            filtering is done.\n *\n * @return 0 on success, a negative AVERROR code on failure\n */\nint av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);\n\n/**\n * Prepare the filter for use, after all the parameters and options have been\n * set.\n */\nint av_bsf_init(AVBSFContext *ctx);\n\n/**\n * Submit a packet for filtering.\n *\n * After sending each packet, the filter must be completely drained by calling\n * av_bsf_receive_packet() repeatedly until it returns AVERROR(EAGAIN) or\n * AVERROR_EOF.\n *\n * @param pkt the packet to filter. The bitstream filter will take ownership of\n * the packet and reset the contents of pkt. pkt is not touched if an error occurs.\n * This parameter may be NULL, which signals the end of the stream (i.e. no more\n * packets will be sent). That will cause the filter to output any packets it\n * may have buffered internally.\n *\n * @return 0 on success, a negative AVERROR on error.\n */\nint av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);\n\n/**\n * Retrieve a filtered packet.\n *\n * @param[out] pkt this struct will be filled with the contents of the filtered\n *                 packet. 
It is owned by the caller and must be freed using\n *                 av_packet_unref() when it is no longer needed.\n *                 This parameter should be \"clean\" (i.e. freshly allocated\n *                 with av_packet_alloc() or unreffed with av_packet_unref())\n *                 when this function is called. If this function returns\n *                 successfully, the contents of pkt will be completely\n *                 overwritten by the returned data. On failure, pkt is not\n *                 touched.\n *\n * @return 0 on success. AVERROR(EAGAIN) if more packets need to be sent to the\n * filter (using av_bsf_send_packet()) to get more output. AVERROR_EOF if there\n * will be no further output from the filter. Another negative AVERROR value if\n * an error occurs.\n *\n * @note one input packet may result in several output packets, so after sending\n * a packet with av_bsf_send_packet(), this function needs to be called\n * repeatedly until it stops returning 0. It is also possible for a filter to\n * output fewer packets than were sent to it, so this function may return\n * AVERROR(EAGAIN) immediately after a successful av_bsf_send_packet() call.\n */\nint av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);\n\n/**\n * Free a bitstream filter context and everything associated with it; write NULL\n * into the supplied pointer.\n */\nvoid av_bsf_free(AVBSFContext **ctx);\n\n/**\n * Get the AVClass for AVBSFContext. 
It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n */\nconst AVClass *av_bsf_get_class(void);\n\n/**\n * Structure for chain/list of bitstream filters.\n * Empty list can be allocated by av_bsf_list_alloc().\n */\ntypedef struct AVBSFList AVBSFList;\n\n/**\n * Allocate empty list of bitstream filters.\n * The list must be later freed by av_bsf_list_free()\n * or finalized by av_bsf_list_finalize().\n *\n * @return Pointer to @ref AVBSFList on success, NULL in case of failure\n */\nAVBSFList *av_bsf_list_alloc(void);\n\n/**\n * Free list of bitstream filters.\n *\n * @param lst Pointer to pointer returned by av_bsf_list_alloc()\n */\nvoid av_bsf_list_free(AVBSFList **lst);\n\n/**\n * Append bitstream filter to the list of bitstream filters.\n *\n * @param lst List to append to\n * @param bsf Filter context to be appended\n *\n * @return >=0 on success, negative AVERROR in case of failure\n */\nint av_bsf_list_append(AVBSFList *lst, AVBSFContext *bsf);\n\n/**\n * Construct new bitstream filter context given its name and options\n * and append it to the list of bitstream filters.\n *\n * @param lst      List to append to\n * @param bsf_name Name of the bitstream filter\n * @param options  Options for the bitstream filter, can be set to NULL\n *\n * @return >=0 on success, negative AVERROR in case of failure\n */\nint av_bsf_list_append2(AVBSFList *lst, const char * bsf_name, AVDictionary **options);\n/**\n * Finalize list of bitstream filters.\n *\n * This function will transform @ref AVBSFList to single @ref AVBSFContext,\n * so the whole chain of bitstream filters can be treated as single filter\n * freshly allocated by av_bsf_alloc().\n * If the call is successful, @ref AVBSFList structure is freed and lst\n * will be set to NULL. 
In case of failure, caller is responsible for\n * freeing the structure by av_bsf_list_free().\n *\n * @param      lst Filter list structure to be transformed\n * @param[out] bsf Pointer to be set to newly created @ref AVBSFContext structure\n *                 representing the chain of bitstream filters\n *\n * @return >=0 on success, negative AVERROR in case of failure\n */\nint av_bsf_list_finalize(AVBSFList **lst, AVBSFContext **bsf);\n\n/**\n * Parse string describing list of bitstream filters and create single\n * @ref AVBSFContext describing the whole chain of bitstream filters.\n * Resulting @ref AVBSFContext can be treated as any other @ref AVBSFContext freshly\n * allocated by av_bsf_alloc().\n *\n * @param      str String describing chain of bitstream filters in format\n *                 `bsf1[=opt1=val1:opt2=val2][,bsf2]`\n * @param[out] bsf Pointer to be set to newly created @ref AVBSFContext structure\n *                 representing the chain of bitstream filters\n *\n * @return >=0 on success, negative AVERROR in case of failure\n */\nint av_bsf_list_parse_str(const char *str, AVBSFContext **bsf);\n\n/**\n * Get null/pass-through bitstream filter.\n *\n * @param[out] bsf Pointer to be set to new instance of pass-through bitstream filter\n *\n * @return >=0 on success, negative AVERROR in case of failure\n */\nint av_bsf_get_null_filter(AVBSFContext **bsf);\n\n/* memory */\n\n/**\n * Same behaviour as av_fast_malloc() but the buffer has additional\n * AV_INPUT_BUFFER_PADDING_SIZE at the end which will always be 0.\n *\n * In addition, the whole buffer will initially and after resizes\n * be 0-initialized so that no uninitialized data will ever appear.\n */\nvoid av_fast_padded_malloc(void *ptr, unsigned int *size, size_t min_size);\n\n/**\n * Same behaviour as av_fast_padded_malloc() except that the buffer will always\n * be 0-initialized after the call.\n */\nvoid av_fast_padded_mallocz(void *ptr, unsigned int *size, size_t min_size);\n\n/**\n * Encode extradata length to a buffer. 
Used by xiph codecs.\n *\n * @param s buffer to write to; must be at least (v/255+1) bytes long\n * @param v size of extradata in bytes\n * @return number of bytes written to the buffer.\n */\nunsigned int av_xiphlacing(unsigned char *s, unsigned int v);\n\n#if FF_API_USER_VISIBLE_AVHWACCEL\n/**\n * Register the hardware accelerator hwaccel.\n *\n * @deprecated  This function doesn't do anything.\n */\nattribute_deprecated\nvoid av_register_hwaccel(AVHWAccel *hwaccel);\n\n/**\n * If hwaccel is NULL, returns the first registered hardware accelerator,\n * if hwaccel is non-NULL, returns the next registered hardware accelerator\n * after hwaccel, or NULL if hwaccel is the last one.\n *\n * @deprecated  AVHWaccel structures contain no user-serviceable parts, so\n *              this function should not be used.\n */\nattribute_deprecated\nAVHWAccel *av_hwaccel_next(const AVHWAccel *hwaccel);\n#endif\n\n#if FF_API_LOCKMGR\n/**\n * Lock operation used by lockmgr\n *\n * @deprecated Deprecated together with av_lockmgr_register().\n */\nenum AVLockOp {\n  AV_LOCK_CREATE,  ///< Create a mutex\n  AV_LOCK_OBTAIN,  ///< Lock the mutex\n  AV_LOCK_RELEASE, ///< Unlock the mutex\n  AV_LOCK_DESTROY, ///< Free mutex resources\n};\n\n/**\n * Register a user provided lock manager supporting the operations\n * specified by AVLockOp. The \"mutex\" argument to the function points\n * to a (void *) where the lockmgr should store/get a pointer to a user\n * allocated mutex. It is NULL upon AV_LOCK_CREATE and equal to the\n * value left by the last call for all other ops. If the lock manager is\n * unable to perform the op then it should leave the mutex in the same\n * state as when it was called and return a non-zero value. However,\n * when called with AV_LOCK_DESTROY the mutex will always be assumed to\n * have been successfully destroyed. 
If av_lockmgr_register succeeds\n * it will return a non-negative value; if it fails it will return a\n * negative value and destroy all mutexes and unregister all callbacks.\n * av_lockmgr_register is not thread-safe; it must be called from a\n * single thread before any calls which make use of locking are used.\n *\n * @param cb User defined callback. av_lockmgr_register invokes calls\n *           to this callback and the previously registered callback.\n *           The callback will be used to create more than one mutex,\n *           each of which must be backed by its own underlying locking\n *           mechanism (i.e. do not use a single static object to\n *           implement your lock manager). If cb is set to NULL the\n *           lockmgr will be unregistered.\n *\n * @deprecated This function does nothing, and always returns 0. Be sure to\n *             build with thread support to get basic thread safety.\n */\nattribute_deprecated\nint av_lockmgr_register(int (*cb)(void **mutex, enum AVLockOp op));\n#endif\n\n/**\n * Get the type of the given codec.\n */\nenum AVMediaType avcodec_get_type(enum AVCodecID codec_id);\n\n/**\n * Get the name of a codec.\n * @return  a static string identifying the codec; never NULL\n */\nconst char *avcodec_get_name(enum AVCodecID id);\n\n/**\n * @return a positive value if s is open (i.e. 
avcodec_open2() was called on it\n * with no corresponding avcodec_close()), 0 otherwise.\n */\nint avcodec_is_open(AVCodecContext *s);\n\n/**\n * @return a non-zero number if codec is an encoder, zero otherwise\n */\nint av_codec_is_encoder(const AVCodec *codec);\n\n/**\n * @return a non-zero number if codec is a decoder, zero otherwise\n */\nint av_codec_is_decoder(const AVCodec *codec);\n\n/**\n * @return descriptor for given codec ID or NULL if no descriptor exists.\n */\nconst AVCodecDescriptor *avcodec_descriptor_get(enum AVCodecID id);\n\n/**\n * Iterate over all codec descriptors known to libavcodec.\n *\n * @param prev previous descriptor. NULL to get the first descriptor.\n *\n * @return next descriptor or NULL after the last descriptor\n */\nconst AVCodecDescriptor *avcodec_descriptor_next(const AVCodecDescriptor *prev);\n\n/**\n * @return codec descriptor with the given name or NULL if no such descriptor\n *         exists.\n */\nconst AVCodecDescriptor *avcodec_descriptor_get_by_name(const char *name);\n\n/**\n * Allocate a CPB properties structure and initialize its fields to default\n * values.\n *\n * @param size if non-NULL, the size of the allocated struct will be written\n *             here. This is useful for embedding it in side data.\n *\n * @return the newly allocated struct or NULL on failure\n */\nAVCPBProperties *av_cpb_properties_alloc(size_t *size);\n\n/**\n * @}\n */\n\n#endif /* AVCODEC_AVCODEC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/avdct.h",
"content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_AVDCT_H\n#define AVCODEC_AVDCT_H\n\n#include \"libavutil/opt.h\"\n\n/**\n * AVDCT context.\n * @note function pointers can be NULL if the specific features have been\n *       disabled at build time.\n */\ntypedef struct AVDCT {\n    const AVClass *av_class;\n\n    void (*idct)(int16_t *block /* align 16 */);\n\n    /**\n     * IDCT input permutation.\n     * Several optimized IDCTs need a permuted input (relative to the\n     * normal order of the reference IDCT).\n     * This permutation must be performed before the idct_put/add.\n     * Note, normally this can be merged with the zigzag/alternate scan<br>\n     * An example to avoid confusion:\n     * - (->decode coeffs -> zigzag reorder -> dequant -> reference IDCT -> ...)\n     * - (x -> reference DCT -> reference IDCT -> x)\n     * - (x -> reference DCT -> simple_mmx_perm = idct_permutation\n     *    -> simple_idct_mmx -> x)\n     * - (-> decode coeffs -> zigzag reorder -> simple_mmx_perm -> dequant\n     *    -> simple_idct_mmx -> ...)\n     */\n    uint8_t idct_permutation[64];\n\n    void (*fdct)(int16_t *block /* align 16 */);\n\n\n    /**\n     * DCT algorithm.\n     * Must use AVOptions to set this 
field.\n     */\n    int dct_algo;\n\n    /**\n     * IDCT algorithm.\n     * Must use AVOptions to set this field.\n     */\n    int idct_algo;\n\n    void (*get_pixels)(int16_t *block /* align 16 */,\n                       const uint8_t *pixels /* align 8 */,\n                       ptrdiff_t line_size);\n\n    int bits_per_sample;\n} AVDCT;\n\n/**\n * Allocates an AVDCT context.\n * This needs to be initialized with avcodec_dct_init() after optionally\n * configuring it with AVOptions.\n *\n * To free it, use av_free().\n */\nAVDCT *avcodec_dct_alloc(void);\nint avcodec_dct_init(AVDCT *);\n\nconst AVClass *avcodec_dct_get_class(void);\n\n#endif /* AVCODEC_AVDCT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/avfft.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_AVFFT_H\n#define AVCODEC_AVFFT_H\n\n/**\n * @file\n * @ingroup lavc_fft\n * FFT functions\n */\n\n/**\n * @defgroup lavc_fft FFT functions\n * @ingroup lavc_misc\n *\n * @{\n */\n\ntypedef float FFTSample;\n\ntypedef struct FFTComplex {\n    FFTSample re, im;\n} FFTComplex;\n\ntypedef struct FFTContext FFTContext;\n\n/**\n * Set up a complex FFT.\n * @param nbits           log2 of the length of the input array\n * @param inverse         if 0 perform the forward transform, if 1 perform the inverse\n */\nFFTContext *av_fft_init(int nbits, int inverse);\n\n/**\n * Do the permutation needed BEFORE calling ff_fft_calc().\n */\nvoid av_fft_permute(FFTContext *s, FFTComplex *z);\n\n/**\n * Do a complex FFT with the parameters defined in av_fft_init(). The\n * input data must be permuted before. 
No 1.0/sqrt(n) normalization is done.\n */\nvoid av_fft_calc(FFTContext *s, FFTComplex *z);\n\nvoid av_fft_end(FFTContext *s);\n\nFFTContext *av_mdct_init(int nbits, int inverse, double scale);\nvoid av_imdct_calc(FFTContext *s, FFTSample *output, const FFTSample *input);\nvoid av_imdct_half(FFTContext *s, FFTSample *output, const FFTSample *input);\nvoid av_mdct_calc(FFTContext *s, FFTSample *output, const FFTSample *input);\nvoid av_mdct_end(FFTContext *s);\n\n/* Real Discrete Fourier Transform */\n\nenum RDFTransformType {\n    DFT_R2C,\n    IDFT_C2R,\n    IDFT_R2C,\n    DFT_C2R,\n};\n\ntypedef struct RDFTContext RDFTContext;\n\n/**\n * Set up a real FFT.\n * @param nbits           log2 of the length of the input array\n * @param trans           the type of transform\n */\nRDFTContext *av_rdft_init(int nbits, enum RDFTransformType trans);\nvoid av_rdft_calc(RDFTContext *s, FFTSample *data);\nvoid av_rdft_end(RDFTContext *s);\n\n/* Discrete Cosine Transform */\n\ntypedef struct DCTContext DCTContext;\n\nenum DCTTransformType {\n    DCT_II = 0,\n    DCT_III,\n    DCT_I,\n    DST_I,\n};\n\n/**\n * Set up DCT.\n *\n * @param nbits           size of the input array:\n *                        (1 << nbits)     for DCT-II, DCT-III and DST-I\n *                        (1 << nbits) + 1 for DCT-I\n * @param type            the type of transform\n *\n * @note the first element of the input of DST-I is ignored\n */\nDCTContext *av_dct_init(int nbits, enum DCTTransformType type);\nvoid av_dct_calc(DCTContext *s, FFTSample *data);\nvoid av_dct_end (DCTContext *s);\n\n/**\n * @}\n */\n\n#endif /* AVCODEC_AVFFT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/d3d11va.h",
"content": "/*\n * Direct3D11 HW acceleration\n *\n * copyright (c) 2009 Laurent Aimar\n * copyright (c) 2015 Steve Lhomme\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_D3D11VA_H\n#define AVCODEC_D3D11VA_H\n\n/**\n * @file\n * @ingroup lavc_codec_hwaccel_d3d11va\n * Public libavcodec D3D11VA header.\n */\n\n#if !defined(_WIN32_WINNT) || _WIN32_WINNT < 0x0602\n#undef _WIN32_WINNT\n#define _WIN32_WINNT 0x0602\n#endif\n\n#include <stdint.h>\n#include <d3d11.h>\n\n/**\n * @defgroup lavc_codec_hwaccel_d3d11va Direct3D11\n * @ingroup lavc_codec_hwaccel\n *\n * @{\n */\n\n#define FF_DXVA2_WORKAROUND_SCALING_LIST_ZIGZAG 1 ///< Work around for Direct3D11 and old UVD/UVD+ ATI video cards\n#define FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO    2 ///< Work around for Direct3D11 and old Intel GPUs with ClearVideo interface\n\n/**\n * This structure is used to provide the necessary configurations and data\n * to the Direct3D11 FFmpeg HWAccel implementation.\n *\n * The application must make it available as AVCodecContext.hwaccel_context.\n *\n * Use av_d3d11va_alloc_context() exclusively to allocate an AVD3D11VAContext.\n */\ntypedef struct AVD3D11VAContext {\n    /**\n     * D3D11 decoder object\n     */\n    ID3D11VideoDecoder *decoder;\n\n 
   /**\n      * D3D11 VideoContext\n      */\n    ID3D11VideoContext *video_context;\n\n    /**\n     * D3D11 configuration used to create the decoder\n     */\n    D3D11_VIDEO_DECODER_CONFIG *cfg;\n\n    /**\n     * The number of surfaces in the surface array\n     */\n    unsigned surface_count;\n\n    /**\n     * The array of Direct3D surfaces used to create the decoder\n     */\n    ID3D11VideoDecoderOutputView **surface;\n\n    /**\n     * A bit field configuring the workarounds needed for using the decoder\n     */\n    uint64_t workaround;\n\n    /**\n     * Private to the FFmpeg AVHWAccel implementation\n     */\n    unsigned report_id;\n\n    /**\n      * Mutex to access video_context\n      */\n    HANDLE  context_mutex;\n} AVD3D11VAContext;\n\n/**\n * Allocate an AVD3D11VAContext.\n *\n * @return Newly-allocated AVD3D11VAContext or NULL on failure.\n */\nAVD3D11VAContext *av_d3d11va_alloc_context(void);\n\n/**\n * @}\n */\n\n#endif /* AVCODEC_D3D11VA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/dirac.h",
    "content": "/*\n * Copyright (C) 2007 Marco Gerards <marco@gnu.org>\n * Copyright (C) 2009 David Conrad\n * Copyright (C) 2011 Jordi Ortiz\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_DIRAC_H\n#define AVCODEC_DIRAC_H\n\n/**\n * @file\n * Interface to Dirac Decoder/Encoder\n * @author Marco Gerards <marco@gnu.org>\n * @author David Conrad\n * @author Jordi Ortiz\n */\n\n#include \"avcodec.h\"\n\n/**\n * The spec limits the number of wavelet decompositions to 4 for both\n * level 1 (VC-2) and 128 (long-gop default).\n * 5 decompositions is the maximum before >16-bit buffers are needed.\n * Schroedinger allows this for DD 9,7 and 13,7 wavelets only, limiting\n * the others to 4 decompositions (or 3 for the fidelity filter).\n *\n * We use this instead of MAX_DECOMPOSITIONS to save some memory.\n */\n#define MAX_DWT_LEVELS 5\n\n/**\n * Parse code values:\n *\n * Dirac Specification ->\n * 9.6.1  Table 9.1\n *\n * VC-2 Specification  ->\n * 10.4.1 Table 10.1\n */\n\nenum DiracParseCodes {\n    DIRAC_PCODE_SEQ_HEADER      = 0x00,\n    DIRAC_PCODE_END_SEQ         = 0x10,\n    DIRAC_PCODE_AUX             = 0x20,\n    DIRAC_PCODE_PAD             = 0x30,\n    DIRAC_PCODE_PICTURE_CODED   = 0x08,\n    DIRAC_PCODE_PICTURE_RAW     = 
0x48,\n    DIRAC_PCODE_PICTURE_LOW_DEL = 0xC8,\n    DIRAC_PCODE_PICTURE_HQ      = 0xE8,\n    DIRAC_PCODE_INTER_NOREF_CO1 = 0x0A,\n    DIRAC_PCODE_INTER_NOREF_CO2 = 0x09,\n    DIRAC_PCODE_INTER_REF_CO1   = 0x0D,\n    DIRAC_PCODE_INTER_REF_CO2   = 0x0E,\n    DIRAC_PCODE_INTRA_REF_CO    = 0x0C,\n    DIRAC_PCODE_INTRA_REF_RAW   = 0x4C,\n    DIRAC_PCODE_INTRA_REF_PICT  = 0xCC,\n    DIRAC_PCODE_MAGIC           = 0x42424344,\n};\n\ntypedef struct DiracVersionInfo {\n    int major;\n    int minor;\n} DiracVersionInfo;\n\ntypedef struct AVDiracSeqHeader {\n    unsigned width;\n    unsigned height;\n    uint8_t chroma_format;          ///< 0: 444  1: 422  2: 420\n\n    uint8_t interlaced;\n    uint8_t top_field_first;\n\n    uint8_t frame_rate_index;       ///< index into dirac_frame_rate[]\n    uint8_t aspect_ratio_index;     ///< index into dirac_aspect_ratio[]\n\n    uint16_t clean_width;\n    uint16_t clean_height;\n    uint16_t clean_left_offset;\n    uint16_t clean_right_offset;\n\n    uint8_t pixel_range_index;      ///< index into dirac_pixel_range_presets[]\n    uint8_t color_spec_index;       ///< index into dirac_color_spec_presets[]\n\n    int profile;\n    int level;\n\n    AVRational framerate;\n    AVRational sample_aspect_ratio;\n\n    enum AVPixelFormat pix_fmt;\n    enum AVColorRange color_range;\n    enum AVColorPrimaries color_primaries;\n    enum AVColorTransferCharacteristic color_trc;\n    enum AVColorSpace colorspace;\n\n    DiracVersionInfo version;\n    int bit_depth;\n} AVDiracSeqHeader;\n\n/**\n * Parse a Dirac sequence header.\n *\n * @param dsh this function will allocate and fill an AVDiracSeqHeader struct\n *            and write it into this pointer. 
The caller must free it with\n *            av_free().\n * @param buf the data buffer\n * @param buf_size the size of the data buffer in bytes\n * @param log_ctx if non-NULL, this function will log errors here\n * @return 0 on success, a negative AVERROR code on failure\n */\nint av_dirac_parse_sequence_header(AVDiracSeqHeader **dsh,\n                                   const uint8_t *buf, size_t buf_size,\n                                   void *log_ctx);\n\n#endif /* AVCODEC_DIRAC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/dv_profile.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_DV_PROFILE_H\n#define AVCODEC_DV_PROFILE_H\n\n#include <stdint.h>\n\n#include \"libavutil/pixfmt.h\"\n#include \"libavutil/rational.h\"\n#include \"avcodec.h\"\n\n/* minimum number of bytes to read from a DV stream in order to\n * determine the profile */\n#define DV_PROFILE_BYTES (6 * 80) /* 6 DIF blocks */\n\n\n/*\n * AVDVProfile is used to express the differences between various\n * DV flavors. For now it's primarily used for differentiating\n * 525/60 and 625/50, but the plans are to use it for various\n * DV specs as well (e.g. SMPTE314M vs. 
IEC 61834).\n */\ntypedef struct AVDVProfile {\n    int              dsf;                   /* value of the dsf in the DV header */\n    int              video_stype;           /* stype for VAUX source pack */\n    int              frame_size;            /* total size of one frame in bytes */\n    int              difseg_size;           /* number of DIF segments per DIF channel */\n    int              n_difchan;             /* number of DIF channels per frame */\n    AVRational       time_base;             /* 1/framerate */\n    int              ltc_divisor;           /* FPS from the LTS standpoint */\n    int              height;                /* picture height in pixels */\n    int              width;                 /* picture width in pixels */\n    AVRational       sar[2];                /* sample aspect ratios for 4:3 and 16:9 */\n    enum AVPixelFormat pix_fmt;             /* picture pixel format */\n    int              bpm;                   /* blocks per macroblock */\n    const uint8_t   *block_sizes;           /* AC block sizes, in bits */\n    int              audio_stride;          /* size of audio_shuffle table */\n    int              audio_min_samples[3];  /* min amount of audio samples */\n                                            /* for 48kHz, 44.1kHz and 32kHz */\n    int              audio_samples_dist[5]; /* how many samples are supposed to be */\n                                            /* in each frame in a 5 frames window */\n    const uint8_t  (*audio_shuffle)[9];     /* PCM shuffling table */\n} AVDVProfile;\n\n/**\n * Get a DV profile for the provided compressed frame.\n *\n * @param sys the profile used for the previous frame, may be NULL\n * @param frame the compressed data buffer\n * @param buf_size size of the buffer in bytes\n * @return the DV profile for the supplied data or NULL on failure\n */\nconst AVDVProfile *av_dv_frame_profile(const AVDVProfile *sys,\n                                       const uint8_t *frame, 
unsigned buf_size);\n\n/**\n * Get a DV profile for the provided stream parameters.\n */\nconst AVDVProfile *av_dv_codec_profile(int width, int height, enum AVPixelFormat pix_fmt);\n\n/**\n * Get a DV profile for the provided stream parameters.\n * The frame rate is used as a best-effort parameter.\n */\nconst AVDVProfile *av_dv_codec_profile2(int width, int height, enum AVPixelFormat pix_fmt, AVRational frame_rate);\n\n#endif /* AVCODEC_DV_PROFILE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/dxva2.h",
"content": "/*\n * DXVA2 HW acceleration\n *\n * copyright (c) 2009 Laurent Aimar\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_DXVA2_H\n#define AVCODEC_DXVA2_H\n\n/**\n * @file\n * @ingroup lavc_codec_hwaccel_dxva2\n * Public libavcodec DXVA2 header.\n */\n\n#if !defined(_WIN32_WINNT) || _WIN32_WINNT < 0x0602\n#undef _WIN32_WINNT\n#define _WIN32_WINNT 0x0602\n#endif\n\n#include <stdint.h>\n#include <d3d9.h>\n#include <dxva2api.h>\n\n/**\n * @defgroup lavc_codec_hwaccel_dxva2 DXVA2\n * @ingroup lavc_codec_hwaccel\n *\n * @{\n */\n\n#define FF_DXVA2_WORKAROUND_SCALING_LIST_ZIGZAG 1 ///< Work around for DXVA2 and old UVD/UVD+ ATI video cards\n#define FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO    2 ///< Work around for DXVA2 and old Intel GPUs with ClearVideo interface\n\n/**\n * This structure is used to provide the necessary configurations and data\n * to the DXVA2 FFmpeg HWAccel implementation.\n *\n * The application must make it available as AVCodecContext.hwaccel_context.\n */\nstruct dxva_context {\n    /**\n     * DXVA2 decoder object\n     */\n    IDirectXVideoDecoder *decoder;\n\n    /**\n     * DXVA2 configuration used to create the decoder\n     */\n    const DXVA2_ConfigPictureDecode *cfg;\n\n    /**\n     * The 
number of surfaces in the surface array\n     */\n    unsigned surface_count;\n\n    /**\n     * The array of Direct3D surfaces used to create the decoder\n     */\n    LPDIRECT3DSURFACE9 *surface;\n\n    /**\n     * A bit field configuring the workarounds needed for using the decoder\n     */\n    uint64_t workaround;\n\n    /**\n     * Private to the FFmpeg AVHWAccel implementation\n     */\n    unsigned report_id;\n};\n\n/**\n * @}\n */\n\n#endif /* AVCODEC_DXVA2_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/jni.h",
"content": "/*\n * JNI public API functions\n *\n * Copyright (c) 2015-2016 Matthieu Bouron <matthieu.bouron stupeflix.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_JNI_H\n#define AVCODEC_JNI_H\n\n/*\n * Manually set a Java virtual machine which will be used to retrieve the JNI\n * environment. Once a Java VM is set it cannot be changed afterwards, meaning\n * you can call av_jni_set_java_vm multiple times with the same Java VM pointer;\n * however, it will error out if you try to set a different Java VM.\n *\n * @param vm Java virtual machine\n * @param log_ctx context used for logging, can be NULL\n * @return 0 on success, < 0 otherwise\n */\nint av_jni_set_java_vm(void *vm, void *log_ctx);\n\n/*\n * Get the Java virtual machine which has been set with av_jni_set_java_vm.\n *\n * @param log_ctx context used for logging, can be NULL\n * @return a pointer to the Java virtual machine\n */\nvoid *av_jni_get_java_vm(void *log_ctx);\n\n#endif /* AVCODEC_JNI_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/mediacodec.h",
    "content": "/*\n * Android MediaCodec public API\n *\n * Copyright (c) 2016 Matthieu Bouron <matthieu.bouron stupeflix.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_MEDIACODEC_H\n#define AVCODEC_MEDIACODEC_H\n\n#include \"libavcodec/avcodec.h\"\n\n/**\n * This structure holds a reference to a android/view/Surface object that will\n * be used as output by the decoder.\n *\n */\ntypedef struct AVMediaCodecContext {\n\n    /**\n     * android/view/Surface object reference.\n     */\n    void *surface;\n\n} AVMediaCodecContext;\n\n/**\n * Allocate and initialize a MediaCodec context.\n *\n * When decoding with MediaCodec is finished, the caller must free the\n * MediaCodec context with av_mediacodec_default_free.\n *\n * @return a pointer to a newly allocated AVMediaCodecContext on success, NULL otherwise\n */\nAVMediaCodecContext *av_mediacodec_alloc_context(void);\n\n/**\n * Convenience function that sets up the MediaCodec context.\n *\n * @param avctx codec context\n * @param ctx MediaCodec context to initialize\n * @param surface reference to an android/view/Surface\n * @return 0 on success, < 0 otherwise\n */\nint av_mediacodec_default_init(AVCodecContext *avctx, AVMediaCodecContext *ctx, void *surface);\n\n/**\n * 
This function must be called to free the MediaCodec context initialized with\n * av_mediacodec_default_init().\n *\n * @param avctx codec context\n */\nvoid av_mediacodec_default_free(AVCodecContext *avctx);\n\n/**\n * Opaque structure representing a MediaCodec buffer to render.\n */\ntypedef struct MediaCodecBuffer AVMediaCodecBuffer;\n\n/**\n * Release a MediaCodec buffer and render it to the surface that is associated\n * with the decoder. This function should only be called once on a given\n * buffer; once released, the underlying buffer returns to the codec, so\n * subsequent calls to this function will have no effect.\n *\n * @param buffer the buffer to render\n * @param render 1 to release and render the buffer to the surface or 0 to\n * discard the buffer\n * @return 0 on success, < 0 otherwise\n */\nint av_mediacodec_release_buffer(AVMediaCodecBuffer *buffer, int render);\n\n#endif /* AVCODEC_MEDIACODEC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/qsv.h",
    "content": "/*\n * Intel MediaSDK QSV public API\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_QSV_H\n#define AVCODEC_QSV_H\n\n#include <mfx/mfxvideo.h>\n\n#include \"libavutil/buffer.h\"\n\n/**\n * This struct is used for communicating QSV parameters between libavcodec and\n * the caller. It is managed by the caller and must be assigned to\n * AVCodecContext.hwaccel_context.\n * - decoding: hwaccel_context must be set on return from the get_format()\n *             callback\n * - encoding: hwaccel_context must be set before avcodec_open2()\n */\ntypedef struct AVQSVContext {\n    /**\n     * If non-NULL, the session to use for encoding or decoding.\n     * Otherwise, libavcodec will try to create an internal session.\n     */\n    mfxSession session;\n\n    /**\n     * The IO pattern to use.\n     */\n    int iopattern;\n\n    /**\n     * Extra buffers to pass to encoder or decoder initialization.\n     */\n    mfxExtBuffer **ext_buffers;\n    int         nb_ext_buffers;\n\n    /**\n     * Encoding only. If this field is set to non-zero by the caller, libavcodec\n     * will create an mfxExtOpaqueSurfaceAlloc extended buffer and pass it to\n     * the encoder initialization. 
This only makes sense if iopattern is also\n     * set to MFX_IOPATTERN_IN_OPAQUE_MEMORY.\n     *\n     * The number of allocated opaque surfaces will be the sum of the number\n     * required by the encoder and the user-provided value nb_opaque_surfaces.\n     * The array of the opaque surfaces will be exported to the caller through\n     * the opaque_surfaces field.\n     */\n    int opaque_alloc;\n\n    /**\n     * Encoding only, and only if opaque_alloc is set to non-zero. Before\n     * calling avcodec_open2(), the caller should set this field to the number\n     * of extra opaque surfaces to allocate beyond what is required by the\n     * encoder.\n     *\n     * On return from avcodec_open2(), this field will be set by libavcodec to\n     * the total number of allocated opaque surfaces.\n     */\n    int nb_opaque_surfaces;\n\n    /**\n     * Encoding only, and only if opaque_alloc is set to non-zero. On return\n     * from avcodec_open2(), this field will be used by libavcodec to export the\n     * array of the allocated opaque surfaces to the caller, so they can be\n     * passed to other parts of the pipeline.\n     *\n     * The buffer reference exported here is owned and managed by libavcodec,\n     * the callers should make their own reference with av_buffer_ref() and free\n     * it with av_buffer_unref() when it is no longer needed.\n     *\n     * The buffer data is an nb_opaque_surfaces-sized array of mfxFrameSurface1.\n     */\n    AVBufferRef *opaque_surfaces;\n\n    /**\n     * Encoding only, and only if opaque_alloc is set to non-zero. On return\n     * from avcodec_open2(), this field will be set to the surface type used in\n     * the opaque allocation request.\n     */\n    int opaque_alloc_type;\n} AVQSVContext;\n\n/**\n * Allocate a new context.\n *\n * It must be freed by the caller with av_free().\n */\nAVQSVContext *av_qsv_alloc_context(void);\n\n#endif /* AVCODEC_QSV_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/vaapi.h",
    "content": "/*\n * Video Acceleration API (shared data between FFmpeg and the video player)\n * HW decode acceleration for MPEG-2, MPEG-4, H.264 and VC-1\n *\n * Copyright (C) 2008-2009 Splitted-Desktop Systems\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_VAAPI_H\n#define AVCODEC_VAAPI_H\n\n/**\n * @file\n * @ingroup lavc_codec_hwaccel_vaapi\n * Public libavcodec VA API header.\n */\n\n#include <stdint.h>\n#include \"libavutil/attributes.h\"\n#include \"version.h\"\n\n#if FF_API_STRUCT_VAAPI_CONTEXT\n\n/**\n * @defgroup lavc_codec_hwaccel_vaapi VA API Decoding\n * @ingroup lavc_codec_hwaccel\n * @{\n */\n\n/**\n * This structure is used to share data between the FFmpeg library and\n * the client video application.\n * This shall be zero-allocated and available as\n * AVCodecContext.hwaccel_context. All user members can be set once\n * during initialization or through each AVCodecContext.get_buffer()\n * function call. 
In any case, they must be valid prior to calling\n * decoding functions.\n *\n * Deprecated: use AVCodecContext.hw_frames_ctx instead.\n */\nstruct attribute_deprecated vaapi_context {\n    /**\n     * Window system dependent data\n     *\n     * - encoding: unused\n     * - decoding: Set by user\n     */\n    void *display;\n\n    /**\n     * Configuration ID\n     *\n     * - encoding: unused\n     * - decoding: Set by user\n     */\n    uint32_t config_id;\n\n    /**\n     * Context ID (video decode pipeline)\n     *\n     * - encoding: unused\n     * - decoding: Set by user\n     */\n    uint32_t context_id;\n};\n\n/* @} */\n\n#endif /* FF_API_STRUCT_VAAPI_CONTEXT */\n\n#endif /* AVCODEC_VAAPI_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/vdpau.h",
    "content": "/*\n * The Video Decode and Presentation API for UNIX (VDPAU) is used for\n * hardware-accelerated decoding of MPEG-1/2, H.264 and VC-1.\n *\n * Copyright (C) 2008 NVIDIA\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_VDPAU_H\n#define AVCODEC_VDPAU_H\n\n/**\n * @file\n * @ingroup lavc_codec_hwaccel_vdpau\n * Public libavcodec VDPAU header.\n */\n\n\n/**\n * @defgroup lavc_codec_hwaccel_vdpau VDPAU Decoder and Renderer\n * @ingroup lavc_codec_hwaccel\n *\n * VDPAU hardware acceleration has two modules\n * - VDPAU decoding\n * - VDPAU presentation\n *\n * The VDPAU decoding module parses all headers using FFmpeg\n * parsing mechanisms and uses VDPAU for the actual decoding.\n *\n * As per the current implementation, the actual decoding\n * and rendering (API calls) are done as part of the VDPAU\n * presentation (vo_vdpau.c) module.\n *\n * @{\n */\n\n#include <vdpau/vdpau.h>\n\n#include \"libavutil/avconfig.h\"\n#include \"libavutil/attributes.h\"\n\n#include \"avcodec.h\"\n#include \"version.h\"\n\nstruct AVCodecContext;\nstruct AVFrame;\n\ntypedef int (*AVVDPAU_Render2)(struct AVCodecContext *, struct AVFrame *,\n                               const VdpPictureInfo *, uint32_t,\n                               
const VdpBitstreamBuffer *);\n\n/**\n * This structure is used to share data between the libavcodec library and\n * the client video application.\n * The user shall allocate the structure via the av_alloc_vdpau_hwaccel\n * function and make it available as\n * AVCodecContext.hwaccel_context. Members can be set by the user once\n * during initialization or through each AVCodecContext.get_buffer()\n * function call. In any case, they must be valid prior to calling\n * decoding functions.\n *\n * The size of this structure is not a part of the public ABI and must not\n * be used outside of libavcodec. Use av_vdpau_alloc_context() to allocate an\n * AVVDPAUContext.\n */\ntypedef struct AVVDPAUContext {\n    /**\n     * VDPAU decoder handle\n     *\n     * Set by user.\n     */\n    VdpDecoder decoder;\n\n    /**\n     * VDPAU decoder render callback\n     *\n     * Set by the user.\n     */\n    VdpDecoderRender *render;\n\n    AVVDPAU_Render2 render2;\n} AVVDPAUContext;\n\n/**\n * @brief allocation function for AVVDPAUContext\n *\n * Allows extending the struct without breaking API/ABI\n */\nAVVDPAUContext *av_alloc_vdpaucontext(void);\n\nAVVDPAU_Render2 av_vdpau_hwaccel_get_render2(const AVVDPAUContext *);\nvoid av_vdpau_hwaccel_set_render2(AVVDPAUContext *, AVVDPAU_Render2);\n\n/**\n * Associate a VDPAU device with a codec context for hardware acceleration.\n * This function is meant to be called from the get_format() codec callback,\n * or earlier. It can also be called after avcodec_flush_buffers() to change\n * the underlying VDPAU device mid-stream (e.g. 
to recover from non-transparent\n * display preemption).\n *\n * @note get_format() must return AV_PIX_FMT_VDPAU if this function completes\n * successfully.\n *\n * @param avctx decoding context whose get_format() callback is invoked\n * @param device VDPAU device handle to use for hardware acceleration\n * @param get_proc_address VDPAU device driver\n * @param flags zero or more OR'd AV_HWACCEL_FLAG_* flags\n *\n * @return 0 on success, an AVERROR code on failure.\n */\nint av_vdpau_bind_context(AVCodecContext *avctx, VdpDevice device,\n                          VdpGetProcAddress *get_proc_address, unsigned flags);\n\n/**\n * Gets the parameters to create an adequate VDPAU video surface for the codec\n * context using VDPAU hardware decoding acceleration.\n *\n * @note Behavior is undefined if the context was not successfully bound to a\n * VDPAU device using av_vdpau_bind_context().\n *\n * @param avctx the codec context being used for decoding the stream\n * @param type storage space for the VDPAU video surface chroma type\n *              (or NULL to ignore)\n * @param width storage space for the VDPAU video surface pixel width\n *              (or NULL to ignore)\n * @param height storage space for the VDPAU video surface pixel height\n *              (or NULL to ignore)\n *\n * @return 0 on success, a negative AVERROR code on failure.\n */\nint av_vdpau_get_surface_parameters(AVCodecContext *avctx, VdpChromaType *type,\n                                    uint32_t *width, uint32_t *height);\n\n/**\n * Allocate an AVVDPAUContext.\n *\n * @return Newly-allocated AVVDPAUContext or NULL on failure.\n */\nAVVDPAUContext *av_vdpau_alloc_context(void);\n\n#if FF_API_VDPAU_PROFILE\n/**\n * Get a decoder profile that should be used for initializing a VDPAU decoder.\n * Should be called from the AVCodecContext.get_format() callback.\n *\n * @deprecated Use av_vdpau_bind_context() instead.\n *\n * @param avctx the codec context being used for decoding the stream\n * 
@param profile a pointer into which the result will be written on success.\n *                The contents of profile are undefined if this function returns\n *                an error.\n *\n * @return 0 on success (non-negative), a negative AVERROR on failure.\n */\nattribute_deprecated\nint av_vdpau_get_profile(AVCodecContext *avctx, VdpDecoderProfile *profile);\n#endif\n\n/* @}*/\n\n#endif /* AVCODEC_VDPAU_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/version.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_VERSION_H\n#define AVCODEC_VERSION_H\n\n/**\n * @file\n * @ingroup libavc\n * Libavcodec version macros.\n */\n\n#include \"libavutil/version.h\"\n\n#define LIBAVCODEC_VERSION_MAJOR  58\n#define LIBAVCODEC_VERSION_MINOR  18\n#define LIBAVCODEC_VERSION_MICRO 100\n\n#define LIBAVCODEC_VERSION_INT  AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \\\n                                               LIBAVCODEC_VERSION_MINOR, \\\n                                               LIBAVCODEC_VERSION_MICRO)\n#define LIBAVCODEC_VERSION      AV_VERSION(LIBAVCODEC_VERSION_MAJOR,    \\\n                                           LIBAVCODEC_VERSION_MINOR,    \\\n                                           LIBAVCODEC_VERSION_MICRO)\n#define LIBAVCODEC_BUILD        LIBAVCODEC_VERSION_INT\n\n#define LIBAVCODEC_IDENT        \"Lavc\" AV_STRINGIFY(LIBAVCODEC_VERSION)\n\n/**\n * FF_API_* defines may be placed below to indicate public API that will be\n * dropped at a future version bump. 
The defines themselves are not part of\n * the public API and may change, break or disappear at any time.\n *\n * @note, when bumping the major version it is recommended to manually\n * disable each FF_API_* in its own commit instead of disabling them all\n * at once through the bump. This improves the git bisect-ability of the change.\n */\n\n#ifndef FF_API_LOWRES\n#define FF_API_LOWRES            (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_DEBUG_MV\n#define FF_API_DEBUG_MV          (LIBAVCODEC_VERSION_MAJOR < 58)\n#endif\n#ifndef FF_API_AVCTX_TIMEBASE\n#define FF_API_AVCTX_TIMEBASE    (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_CODED_FRAME\n#define FF_API_CODED_FRAME       (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_SIDEDATA_ONLY_PKT\n#define FF_API_SIDEDATA_ONLY_PKT (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_VDPAU_PROFILE\n#define FF_API_VDPAU_PROFILE     (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_CONVERGENCE_DURATION\n#define FF_API_CONVERGENCE_DURATION (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_AVPICTURE\n#define FF_API_AVPICTURE         (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_AVPACKET_OLD_API\n#define FF_API_AVPACKET_OLD_API (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_RTP_CALLBACK\n#define FF_API_RTP_CALLBACK      (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_VBV_DELAY\n#define FF_API_VBV_DELAY         (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_CODER_TYPE\n#define FF_API_CODER_TYPE        (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_STAT_BITS\n#define FF_API_STAT_BITS         (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_PRIVATE_OPT\n#define FF_API_PRIVATE_OPT      (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_ASS_TIMING\n#define FF_API_ASS_TIMING       (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_OLD_BSF\n#define FF_API_OLD_BSF          (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef 
FF_API_COPY_CONTEXT\n#define FF_API_COPY_CONTEXT     (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_GET_CONTEXT_DEFAULTS\n#define FF_API_GET_CONTEXT_DEFAULTS (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_NVENC_OLD_NAME\n#define FF_API_NVENC_OLD_NAME    (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_STRUCT_VAAPI_CONTEXT\n#define FF_API_STRUCT_VAAPI_CONTEXT (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_MERGE_SD_API\n#define FF_API_MERGE_SD_API      (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_TAG_STRING\n#define FF_API_TAG_STRING        (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_GETCHROMA\n#define FF_API_GETCHROMA         (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_CODEC_GET_SET\n#define FF_API_CODEC_GET_SET     (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_USER_VISIBLE_AVHWACCEL\n#define FF_API_USER_VISIBLE_AVHWACCEL (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_LOCKMGR\n#define FF_API_LOCKMGR (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_NEXT\n#define FF_API_NEXT              (LIBAVCODEC_VERSION_MAJOR < 59)\n#endif\n\n\n#endif /* AVCODEC_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/videotoolbox.h",
    "content": "/*\n * Videotoolbox hardware acceleration\n *\n * copyright (c) 2012 Sebastien Zwickert\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_VIDEOTOOLBOX_H\n#define AVCODEC_VIDEOTOOLBOX_H\n\n/**\n * @file\n * @ingroup lavc_codec_hwaccel_videotoolbox\n * Public libavcodec Videotoolbox header.\n */\n\n#include <stdint.h>\n\n#define Picture QuickdrawPicture\n#include <VideoToolbox/VideoToolbox.h>\n#undef Picture\n\n#include \"libavcodec/avcodec.h\"\n\n/**\n * This struct holds all the information that needs to be passed\n * between the caller and libavcodec for initializing Videotoolbox decoding.\n * Its size is not a part of the public ABI, it must be allocated with\n * av_videotoolbox_alloc_context() and freed with av_free().\n */\ntypedef struct AVVideotoolboxContext {\n    /**\n     * Videotoolbox decompression session object.\n     * Created and freed the caller.\n     */\n    VTDecompressionSessionRef session;\n\n    /**\n     * The output callback that must be passed to the session.\n     * Set by av_videottoolbox_default_init()\n     */\n    VTDecompressionOutputCallback output_callback;\n\n    /**\n     * CVPixelBuffer Format Type that Videotoolbox will use for decoded frames.\n     * set by the caller. 
If this is set to 0, then no specific format is\n     * requested from the decoder, and its native format is output.\n     */\n    OSType cv_pix_fmt_type;\n\n    /**\n     * CoreMedia Format Description that Videotoolbox will use to create the decompression session.\n     * Set by the caller.\n     */\n    CMVideoFormatDescriptionRef cm_fmt_desc;\n\n    /**\n     * CoreMedia codec type that Videotoolbox will use to create the decompression session.\n     * Set by the caller.\n     */\n    int cm_codec_type;\n} AVVideotoolboxContext;\n\n/**\n * Allocate and initialize a Videotoolbox context.\n *\n * This function should be called from the get_format() callback when the caller\n * selects the AV_PIX_FMT_VIDEOTOOLBOX format. The caller must then create\n * the decoder object (using the output callback provided by libavcodec) that\n * will be used for Videotoolbox-accelerated decoding.\n *\n * When decoding with Videotoolbox is finished, the caller must destroy the decoder\n * object and free the Videotoolbox context using av_free().\n *\n * @return the newly allocated context or NULL on failure\n */\nAVVideotoolboxContext *av_videotoolbox_alloc_context(void);\n\n/**\n * This is a convenience function that creates and sets up the Videotoolbox context using\n * an internal implementation.\n *\n * @param avctx the corresponding codec context\n *\n * @return >= 0 on success, a negative AVERROR code on failure\n */\nint av_videotoolbox_default_init(AVCodecContext *avctx);\n\n/**\n * This is a convenience function that creates and sets up the Videotoolbox context using\n * an internal implementation.\n *\n * @param avctx the corresponding codec context\n * @param vtctx the Videotoolbox context to use\n *\n * @return >= 0 on success, a negative AVERROR code on failure\n */\nint av_videotoolbox_default_init2(AVCodecContext *avctx, AVVideotoolboxContext *vtctx);\n\n/**\n * This function must be called to free the Videotoolbox context initialized with\n * 
av_videotoolbox_default_init().\n *\n * @param avctx the corresponding codec context\n */\nvoid av_videotoolbox_default_free(AVCodecContext *avctx);\n\n/**\n * @}\n */\n\n#endif /* AVCODEC_VIDEOTOOLBOX_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/vorbis_parser.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * A public API for Vorbis parsing\n *\n * Determines the duration for each packet.\n */\n\n#ifndef AVCODEC_VORBIS_PARSER_H\n#define AVCODEC_VORBIS_PARSER_H\n\n#include <stdint.h>\n\ntypedef struct AVVorbisParseContext AVVorbisParseContext;\n\n/**\n * Allocate and initialize the Vorbis parser using headers in the extradata.\n */\nAVVorbisParseContext *av_vorbis_parse_init(const uint8_t *extradata,\n                                           int extradata_size);\n\n/**\n * Free the parser and everything associated with it.\n */\nvoid av_vorbis_parse_free(AVVorbisParseContext **s);\n\n#define VORBIS_FLAG_HEADER  0x00000001\n#define VORBIS_FLAG_COMMENT 0x00000002\n#define VORBIS_FLAG_SETUP   0x00000004\n\n/**\n * Get the duration for a Vorbis packet.\n *\n * If @p flags is @c NULL,\n * special frames are considered invalid.\n *\n * @param s        Vorbis parser context\n * @param buf      buffer containing a Vorbis frame\n * @param buf_size size of the buffer\n * @param flags    flags for special frames\n */\nint av_vorbis_parse_frame_flags(AVVorbisParseContext *s, const uint8_t *buf,\n                                int buf_size, int *flags);\n\n/**\n * Get the duration 
for a Vorbis packet.\n *\n * @param s        Vorbis parser context\n * @param buf      buffer containing a Vorbis frame\n * @param buf_size size of the buffer\n */\nint av_vorbis_parse_frame(AVVorbisParseContext *s, const uint8_t *buf,\n                          int buf_size);\n\nvoid av_vorbis_parse_reset(AVVorbisParseContext *s);\n\n#endif /* AVCODEC_VORBIS_PARSER_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavcodec/xvmc.h",
    "content": "/*\n * Copyright (C) 2003 Ivan Kalvachev\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVCODEC_XVMC_H\n#define AVCODEC_XVMC_H\n\n/**\n * @file\n * @ingroup lavc_codec_hwaccel_xvmc\n * Public libavcodec XvMC header.\n */\n\n#include <X11/extensions/XvMC.h>\n\n#include \"libavutil/attributes.h\"\n#include \"version.h\"\n#include \"avcodec.h\"\n\n/**\n * @defgroup lavc_codec_hwaccel_xvmc XvMC\n * @ingroup lavc_codec_hwaccel\n *\n * @{\n */\n\n#define AV_XVMC_ID                    0x1DC711C0  /**< special value to ensure that regular pixel routines haven't corrupted the struct\n                                                       the number is 1337 speak for the letters IDCT MCo (motion compensation) */\n\nstruct attribute_deprecated xvmc_pix_fmt {\n    /** The field contains the special constant value AV_XVMC_ID.\n        It is used as a test that the application correctly uses the API,\n        and that there is no corruption caused by pixel routines.\n        - application - set during initialization\n        - libavcodec  - unchanged\n    */\n    int             xvmc_id;\n\n    /** Pointer to the block array allocated by XvMCCreateBlocks().\n        The array has to be freed by XvMCDestroyBlocks().\n        Each 
group of 64 values represents one data block of differential\n        pixel information (in MoCo mode) or coefficients for IDCT.\n        - application - set the pointer during initialization\n        - libavcodec  - fills coefficients/pixel data into the array\n    */\n    short*          data_blocks;\n\n    /** Pointer to the macroblock description array allocated by\n        XvMCCreateMacroBlocks() and freed by XvMCDestroyMacroBlocks().\n        - application - set the pointer during initialization\n        - libavcodec  - fills description data into the array\n    */\n    XvMCMacroBlock* mv_blocks;\n\n    /** Number of macroblock descriptions that can be stored in the mv_blocks\n        array.\n        - application - set during initialization\n        - libavcodec  - unchanged\n    */\n    int             allocated_mv_blocks;\n\n    /** Number of blocks that can be stored at once in the data_blocks array.\n        - application - set during initialization\n        - libavcodec  - unchanged\n    */\n    int             allocated_data_blocks;\n\n    /** Indicate that the hardware would interpret data_blocks as IDCT\n        coefficients and perform IDCT on them.\n        - application - set during initialization\n        - libavcodec  - unchanged\n    */\n    int             idct;\n\n    /** In MoCo mode it indicates that intra macroblocks are assumed to be in\n        unsigned format; same as the XVMC_INTRA_UNSIGNED flag.\n        - application - set during initialization\n        - libavcodec  - unchanged\n    */\n    int             unsigned_intra;\n\n    /** Pointer to the surface allocated by XvMCCreateSurface().\n        It has to be freed by XvMCDestroySurface() on application exit.\n        It identifies the frame and its state on the video hardware.\n        - application - set during initialization\n        - libavcodec  - unchanged\n    */\n    XvMCSurface*    p_surface;\n\n/** Set by the decoder before calling ff_draw_horiz_band(),\n    needed by the 
XvMCRenderSurface function. */\n//@{\n    /** Pointer to the surface used as past reference\n        - application - unchanged\n        - libavcodec  - set\n    */\n    XvMCSurface*    p_past_surface;\n\n    /** Pointer to the surface used as future reference\n        - application - unchanged\n        - libavcodec  - set\n    */\n    XvMCSurface*    p_future_surface;\n\n    /** top/bottom field or frame\n        - application - unchanged\n        - libavcodec  - set\n    */\n    unsigned int    picture_structure;\n\n    /** XVMC_SECOND_FIELD - 1st or 2nd field in the sequence\n        - application - unchanged\n        - libavcodec  - set\n    */\n    unsigned int    flags;\n//@}\n\n    /** Number of macroblock descriptions in the mv_blocks array\n        that have already been passed to the hardware.\n        - application - zeroes it on get_buffer().\n                        A successful ff_draw_horiz_band() may increment it\n                        with filled_mv_blocks_num or zero both.\n        - libavcodec  - unchanged\n    */\n    int             start_mv_blocks_num;\n\n    /** Number of new macroblock descriptions in the mv_blocks array (after\n        start_mv_blocks_num) that are filled by libavcodec and have to be\n        passed to the hardware.\n        - application - zeroes it on get_buffer() or after successful\n                        ff_draw_horiz_band().\n        - libavcodec  - incremented by one for each stored MB\n    */\n    int             filled_mv_blocks_num;\n\n    /** Number of the next free data block; one data block consists of\n        64 short values in the data_blocks array.\n        All blocks before this one have already been claimed by placing their\n        position into the corresponding block description structure field,\n        that are part of the mv_blocks array.\n        - application - zeroes it on get_buffer().\n                        A successful ff_draw_horiz_band() may zero it together\n                        with 
start_mv_blocks_num.\n        - libavcodec  - each decoded macroblock increases it by the number\n                        of coded blocks it contains.\n    */\n    int             next_free_data_block_num;\n};\n\n/**\n * @}\n */\n\n#endif /* AVCODEC_XVMC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavdevice/avdevice.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVDEVICE_AVDEVICE_H\n#define AVDEVICE_AVDEVICE_H\n\n#include \"version.h\"\n\n/**\n * @file\n * @ingroup lavd\n * Main libavdevice API header\n */\n\n/**\n * @defgroup lavd libavdevice\n * Special devices muxing/demuxing library.\n *\n * Libavdevice is a complementary library to @ref libavf \"libavformat\". It\n * provides various \"special\" platform-specific muxers and demuxers, e.g. for\n * grabbing devices, audio capture and playback etc. As a consequence, the\n * (de)muxers in libavdevice are of the AVFMT_NOFILE type (they use their own\n * I/O functions). The filename passed to avformat_open_input() often does not\n * refer to an actually existing file, but has some special device-specific\n * meaning - e.g. for xcbgrab it is the display name.\n *\n * To use libavdevice, simply call avdevice_register_all() to register all\n * compiled muxers and demuxers. 
They all use standard libavformat API.\n *\n * @{\n */\n\n#include \"libavutil/log.h\"\n#include \"libavutil/opt.h\"\n#include \"libavutil/dict.h\"\n#include \"libavformat/avformat.h\"\n\n/**\n * Return the LIBAVDEVICE_VERSION_INT constant.\n */\nunsigned avdevice_version(void);\n\n/**\n * Return the libavdevice build-time configuration.\n */\nconst char *avdevice_configuration(void);\n\n/**\n * Return the libavdevice license.\n */\nconst char *avdevice_license(void);\n\n/**\n * Initialize libavdevice and register all the input and output devices.\n */\nvoid avdevice_register_all(void);\n\n/**\n * Audio input devices iterator.\n *\n * If d is NULL, returns the first registered input audio/video device,\n * if d is non-NULL, returns the next registered input audio/video device after d\n * or NULL if d is the last one.\n */\nAVInputFormat *av_input_audio_device_next(AVInputFormat  *d);\n\n/**\n * Video input devices iterator.\n *\n * If d is NULL, returns the first registered input audio/video device,\n * if d is non-NULL, returns the next registered input audio/video device after d\n * or NULL if d is the last one.\n */\nAVInputFormat *av_input_video_device_next(AVInputFormat  *d);\n\n/**\n * Audio output devices iterator.\n *\n * If d is NULL, returns the first registered output audio/video device,\n * if d is non-NULL, returns the next registered output audio/video device after d\n * or NULL if d is the last one.\n */\nAVOutputFormat *av_output_audio_device_next(AVOutputFormat *d);\n\n/**\n * Video output devices iterator.\n *\n * If d is NULL, returns the first registered output audio/video device,\n * if d is non-NULL, returns the next registered output audio/video device after d\n * or NULL if d is the last one.\n */\nAVOutputFormat *av_output_video_device_next(AVOutputFormat *d);\n\ntypedef struct AVDeviceRect {\n    int x;      /**< x coordinate of top left corner */\n    int y;      /**< y coordinate of top left corner */\n    int width;  /**< width */\n    
int height; /**< height */\n} AVDeviceRect;\n\n/**\n * Message types used by avdevice_app_to_dev_control_message().\n */\nenum AVAppToDevMessageType {\n    /**\n     * Dummy message.\n     */\n    AV_APP_TO_DEV_NONE = MKBETAG('N','O','N','E'),\n\n    /**\n     * Window size change message.\n     *\n     * Message is sent to the device every time the application changes the size\n     * of the window device renders to.\n     * Message should also be sent right after window is created.\n     *\n     * data: AVDeviceRect: new window size.\n     */\n    AV_APP_TO_DEV_WINDOW_SIZE = MKBETAG('G','E','O','M'),\n\n    /**\n     * Repaint request message.\n     *\n     * Message is sent to the device when window has to be repainted.\n     *\n     * data: AVDeviceRect: area required to be repainted.\n     *       NULL: whole area is required to be repainted.\n     */\n    AV_APP_TO_DEV_WINDOW_REPAINT = MKBETAG('R','E','P','A'),\n\n    /**\n     * Request pause/play.\n     *\n     * Application requests pause/unpause playback.\n     * Mostly usable with devices that have internal buffer.\n     * By default devices are not paused.\n     *\n     * data: NULL\n     */\n    AV_APP_TO_DEV_PAUSE        = MKBETAG('P', 'A', 'U', ' '),\n    AV_APP_TO_DEV_PLAY         = MKBETAG('P', 'L', 'A', 'Y'),\n    AV_APP_TO_DEV_TOGGLE_PAUSE = MKBETAG('P', 'A', 'U', 'T'),\n\n    /**\n     * Volume control message.\n     *\n     * Set volume level. It may be device-dependent if volume\n     * is changed per stream or system wide. Per stream volume\n     * change is expected when possible.\n     *\n     * data: double: new volume with range of 0.0 - 1.0.\n     */\n    AV_APP_TO_DEV_SET_VOLUME = MKBETAG('S', 'V', 'O', 'L'),\n\n    /**\n     * Mute control messages.\n     *\n     * Change mute state. It may be device-dependent if mute status\n     * is changed per stream or system wide. 
Per stream mute status\n     * change is expected when possible.\n     *\n     * data: NULL.\n     */\n    AV_APP_TO_DEV_MUTE        = MKBETAG(' ', 'M', 'U', 'T'),\n    AV_APP_TO_DEV_UNMUTE      = MKBETAG('U', 'M', 'U', 'T'),\n    AV_APP_TO_DEV_TOGGLE_MUTE = MKBETAG('T', 'M', 'U', 'T'),\n\n    /**\n     * Get volume/mute messages.\n     *\n     * Force the device to send AV_DEV_TO_APP_VOLUME_LEVEL_CHANGED or\n     * AV_DEV_TO_APP_MUTE_STATE_CHANGED command respectively.\n     *\n     * data: NULL.\n     */\n    AV_APP_TO_DEV_GET_VOLUME = MKBETAG('G', 'V', 'O', 'L'),\n    AV_APP_TO_DEV_GET_MUTE   = MKBETAG('G', 'M', 'U', 'T'),\n};\n\n/**\n * Message types used by avdevice_dev_to_app_control_message().\n */\nenum AVDevToAppMessageType {\n    /**\n     * Dummy message.\n     */\n    AV_DEV_TO_APP_NONE = MKBETAG('N','O','N','E'),\n\n    /**\n     * Create window buffer message.\n     *\n     * Device requests to create a window buffer. Exact meaning is device-\n     * and application-dependent. 
Message is sent before rendering first\n     * frame and all one-shot initializations should be done here.\n     * Application is allowed to ignore preferred window buffer size.\n     *\n     * @note: Application is obligated to inform about window buffer size\n     *        with AV_APP_TO_DEV_WINDOW_SIZE message.\n     *\n     * data: AVDeviceRect: preferred size of the window buffer.\n     *       NULL: no preferred size of the window buffer.\n     */\n    AV_DEV_TO_APP_CREATE_WINDOW_BUFFER = MKBETAG('B','C','R','E'),\n\n    /**\n     * Prepare window buffer message.\n     *\n     * Device requests to prepare a window buffer for rendering.\n     * Exact meaning is device- and application-dependent.\n     * Message is sent before rendering of each frame.\n     *\n     * data: NULL.\n     */\n    AV_DEV_TO_APP_PREPARE_WINDOW_BUFFER = MKBETAG('B','P','R','E'),\n\n    /**\n     * Display window buffer message.\n     *\n     * Device requests to display a window buffer.\n     * Message is sent when new frame is ready to be displayed.\n     * Usually buffers need to be swapped in handler of this message.\n     *\n     * data: NULL.\n     */\n    AV_DEV_TO_APP_DISPLAY_WINDOW_BUFFER = MKBETAG('B','D','I','S'),\n\n    /**\n     * Destroy window buffer message.\n     *\n     * Device requests to destroy a window buffer.\n     * Message is sent when device is about to be destroyed and window\n     * buffer is not required anymore.\n     *\n     * data: NULL.\n     */\n    AV_DEV_TO_APP_DESTROY_WINDOW_BUFFER = MKBETAG('B','D','E','S'),\n\n    /**\n     * Buffer fullness status messages.\n     *\n     * Device signals buffer overflow/underflow.\n     *\n     * data: NULL.\n     */\n    AV_DEV_TO_APP_BUFFER_OVERFLOW = MKBETAG('B','O','F','L'),\n    AV_DEV_TO_APP_BUFFER_UNDERFLOW = MKBETAG('B','U','F','L'),\n\n    /**\n     * Buffer readable/writable.\n     *\n     * Device informs that buffer is readable/writable.\n     * When possible, device informs how many bytes can be 
read or written.\n *\n * @warning Device may not inform when the number of bytes that can be read or written changes.\n *\n * data: int64_t: amount of bytes available to read/write.\n *       NULL: amount of bytes available to read/write is not known.\n */\n    AV_DEV_TO_APP_BUFFER_READABLE = MKBETAG('B','R','D',' '),\n    AV_DEV_TO_APP_BUFFER_WRITABLE = MKBETAG('B','W','R',' '),\n\n    /**\n     * Mute state change message.\n     *\n     * Device informs that mute state has changed.\n     *\n     * data: int: 0 for not muted state, non-zero for muted state.\n     */\n    AV_DEV_TO_APP_MUTE_STATE_CHANGED = MKBETAG('C','M','U','T'),\n\n    /**\n     * Volume level change message.\n     *\n     * Device informs that volume level has changed.\n     *\n     * data: double: new volume with range of 0.0 - 1.0.\n     */\n    AV_DEV_TO_APP_VOLUME_LEVEL_CHANGED = MKBETAG('C','V','O','L'),\n};\n\n/**\n * Send control message from application to device.\n *\n * @param s         device context.\n * @param type      message type.\n * @param data      message data. Exact type depends on message type.\n * @param data_size size of message data.\n * @return >= 0 on success, negative on error.\n *         AVERROR(ENOSYS) when device doesn't implement handler of the message.\n */\nint avdevice_app_to_dev_control_message(struct AVFormatContext *s,\n                                        enum AVAppToDevMessageType type,\n                                        void *data, size_t data_size);\n\n/**\n * Send control message from device to application.\n *\n * @param s         device context.\n * @param type      message type.\n * @param data      message data. 
Can be NULL.\n * @param data_size size of message data.\n * @return >= 0 on success, negative on error.\n *         AVERROR(ENOSYS) when application doesn't implement handler of the message.\n */\nint avdevice_dev_to_app_control_message(struct AVFormatContext *s,\n                                        enum AVDevToAppMessageType type,\n                                        void *data, size_t data_size);\n\n/**\n * The following API allows the user to probe device capabilities (supported codecs,\n * pixel formats, sample formats, resolutions, channel counts, etc).\n * It is built on top of the AVOption API.\n * Queried capabilities make it possible to set up converters of video or audio\n * parameters that fit the device.\n *\n * List of capabilities that can be queried:\n *  - Capabilities valid for both audio and video devices:\n *    - codec:          supported audio/video codecs.\n *                      type: AV_OPT_TYPE_INT (AVCodecID value)\n *  - Capabilities valid for audio devices:\n *    - sample_format:  supported sample formats.\n *                      type: AV_OPT_TYPE_INT (AVSampleFormat value)\n *    - sample_rate:    supported sample rates.\n *                      type: AV_OPT_TYPE_INT\n *    - channels:       supported number of channels.\n *                      type: AV_OPT_TYPE_INT\n *    - channel_layout: supported channel layouts.\n *                      type: AV_OPT_TYPE_INT64\n *  - Capabilities valid for video devices:\n *    - pixel_format:   supported pixel formats.\n *                      type: AV_OPT_TYPE_INT (AVPixelFormat value)\n *    - window_size:    supported window sizes (describes the size of the window presented to the user).\n *                      type: AV_OPT_TYPE_IMAGE_SIZE\n *    - frame_size:     supported frame sizes (describes the size of provided video frames).\n *                      type: AV_OPT_TYPE_IMAGE_SIZE\n *    - fps:            supported fps values\n *                      type: AV_OPT_TYPE_RATIONAL\n *\n * 
The value of a capability may be set by the user using the av_opt_set() function\n * and the AVDeviceCapabilitiesQuery object. Subsequent queries will\n * limit results to the values matching the already set capabilities.\n * For example, setting a codec may impact the number of formats or fps values\n * returned during the next query. Setting an invalid value may limit the results to zero.\n *\n * Example of usage based on the opengl output device:\n *\n * @code\n *  AVFormatContext *oc = NULL;\n *  AVDeviceCapabilitiesQuery *caps = NULL;\n *  AVOptionRanges *ranges;\n *  int ret;\n *\n *  if ((ret = avformat_alloc_output_context2(&oc, NULL, \"opengl\", NULL)) < 0)\n *      goto fail;\n *  if (avdevice_capabilities_create(&caps, oc, NULL) < 0)\n *      goto fail;\n *\n *  //query codecs\n *  if ((ret = av_opt_query_ranges(&ranges, caps, \"codec\", AV_OPT_MULTI_COMPONENT_RANGE)) < 0)\n *      goto fail;\n *  //pick codec here and set it\n *  av_opt_set_int(caps, \"codec\", AV_CODEC_ID_RAWVIDEO, 0);\n *\n *  //query format\n *  if ((ret = av_opt_query_ranges(&ranges, caps, \"pixel_format\", AV_OPT_MULTI_COMPONENT_RANGE)) < 0)\n *      goto fail;\n *  //pick format here and set it\n *  av_opt_set_int(caps, \"pixel_format\", AV_PIX_FMT_YUV420P, 0);\n *\n *  //query and set more capabilities\n *\n * fail:\n *  //clean up code\n *  avdevice_capabilities_free(&caps, oc);\n *  avformat_free_context(oc);\n * @endcode\n */\n\n/**\n * Structure describing device capabilities.\n *\n * It is used by devices in conjunction with the av_device_capabilities AVOption table\n * to implement the capabilities probing API based on the AVOption API. 
Should not be used directly.\n */\ntypedef struct AVDeviceCapabilitiesQuery {\n    const AVClass *av_class;\n    AVFormatContext *device_context;\n    enum AVCodecID codec;\n    enum AVSampleFormat sample_format;\n    enum AVPixelFormat pixel_format;\n    int sample_rate;\n    int channels;\n    int64_t channel_layout;\n    int window_width;\n    int window_height;\n    int frame_width;\n    int frame_height;\n    AVRational fps;\n} AVDeviceCapabilitiesQuery;\n\n/**\n * AVOption table used by devices to implement device capabilities API. Should not be used by a user.\n */\nextern const AVOption av_device_capabilities[];\n\n/**\n * Initialize capabilities probing API based on AVOption API.\n *\n * avdevice_capabilities_free() must be called when query capabilities API is\n * not used anymore.\n *\n * @param[out] caps      Device capabilities data. Pointer to a NULL pointer must be passed.\n * @param s              Context of the device.\n * @param device_options An AVDictionary filled with device-private options.\n *                       On return this parameter will be destroyed and replaced with a dict\n *                       containing options that were not found. 
May be NULL.\n *                       The same options must be passed later to avformat_write_header() for output\n *                       devices or avformat_open_input() for input devices, or at any other place\n *                       that affects device-private options.\n *\n * @return >= 0 on success, negative otherwise.\n */\nint avdevice_capabilities_create(AVDeviceCapabilitiesQuery **caps, AVFormatContext *s,\n                                 AVDictionary **device_options);\n\n/**\n * Free resources created by avdevice_capabilities_create()\n *\n * @param caps Device capabilities data to be freed.\n * @param s    Context of the device.\n */\nvoid avdevice_capabilities_free(AVDeviceCapabilitiesQuery **caps, AVFormatContext *s);\n\n/**\n * Structure describes basic parameters of the device.\n */\ntypedef struct AVDeviceInfo {\n    char *device_name;                   /**< device name, format depends on device */\n    char *device_description;            /**< human friendly name */\n} AVDeviceInfo;\n\n/**\n * List of devices.\n */\ntypedef struct AVDeviceInfoList {\n    AVDeviceInfo **devices;              /**< list of autodetected devices */\n    int nb_devices;                      /**< number of autodetected devices */\n    int default_device;                  /**< index of default device or -1 if no default */\n} AVDeviceInfoList;\n\n/**\n * List devices.\n *\n * Returns available device names and their parameters.\n *\n * @note: Some devices may accept system-dependent device names that cannot be\n *        autodetected. 
The list returned by this function cannot be assumed to\n *        always be complete.\n *\n * @param s                device context.\n * @param[out] device_list list of autodetected devices.\n * @return count of autodetected devices, negative on error.\n */\nint avdevice_list_devices(struct AVFormatContext *s, AVDeviceInfoList **device_list);\n\n/**\n * Convenience function to free the result of avdevice_list_devices().\n *\n * @param device_list device list to be freed.\n */\nvoid avdevice_free_list_devices(AVDeviceInfoList **device_list);\n\n/**\n * List devices.\n *\n * Returns available device names and their parameters.\n * These are convenient wrappers for avdevice_list_devices().\n * Device context is allocated and deallocated internally.\n *\n * @param device           device format. May be NULL if device name is set.\n * @param device_name      device name. May be NULL if device format is set.\n * @param device_options   An AVDictionary filled with device-private options. May be NULL.\n *                         The same options must be passed later to avformat_write_header() for output\n *                         devices or avformat_open_input() for input devices, or at any other place\n *                         that affects device-private options.\n * @param[out] device_list list of autodetected devices\n * @return count of autodetected devices, negative on error.\n * @note device argument takes precedence over device_name when both are set.\n */\nint avdevice_list_input_sources(struct AVInputFormat *device, const char *device_name,\n                                AVDictionary *device_options, AVDeviceInfoList **device_list);\nint avdevice_list_output_sinks(struct AVOutputFormat *device, const char *device_name,\n                               AVDictionary *device_options, AVDeviceInfoList **device_list);\n\n/**\n * @}\n */\n\n#endif /* AVDEVICE_AVDEVICE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavdevice/version.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVDEVICE_VERSION_H\n#define AVDEVICE_VERSION_H\n\n/**\n * @file\n * @ingroup lavd\n * Libavdevice version macros\n */\n\n#include \"libavutil/version.h\"\n\n#define LIBAVDEVICE_VERSION_MAJOR  58\n#define LIBAVDEVICE_VERSION_MINOR   3\n#define LIBAVDEVICE_VERSION_MICRO 100\n\n#define LIBAVDEVICE_VERSION_INT AV_VERSION_INT(LIBAVDEVICE_VERSION_MAJOR, \\\n                                               LIBAVDEVICE_VERSION_MINOR, \\\n                                               LIBAVDEVICE_VERSION_MICRO)\n#define LIBAVDEVICE_VERSION     AV_VERSION(LIBAVDEVICE_VERSION_MAJOR, \\\n                                           LIBAVDEVICE_VERSION_MINOR, \\\n                                           LIBAVDEVICE_VERSION_MICRO)\n#define LIBAVDEVICE_BUILD       LIBAVDEVICE_VERSION_INT\n\n#define LIBAVDEVICE_IDENT       \"Lavd\" AV_STRINGIFY(LIBAVDEVICE_VERSION)\n\n/**\n * FF_API_* defines may be placed below to indicate public API that will be\n * dropped at a future version bump. The defines themselves are not part of\n * the public API and may change, break or disappear at any time.\n */\n\n#endif /* AVDEVICE_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavfilter/avfilter.h",
    "content": "/*\n * filter layer\n * Copyright (c) 2007 Bobby Bingham\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVFILTER_AVFILTER_H\n#define AVFILTER_AVFILTER_H\n\n/**\n * @file\n * @ingroup lavfi\n * Main libavfilter public API header\n */\n\n/**\n * @defgroup lavfi libavfilter\n * Graph-based frame editing library.\n *\n * @{\n */\n\n#include <stddef.h>\n\n#include \"libavutil/attributes.h\"\n#include \"libavutil/avutil.h\"\n#include \"libavutil/buffer.h\"\n#include \"libavutil/dict.h\"\n#include \"libavutil/frame.h\"\n#include \"libavutil/log.h\"\n#include \"libavutil/samplefmt.h\"\n#include \"libavutil/pixfmt.h\"\n#include \"libavutil/rational.h\"\n\n#include \"libavfilter/version.h\"\n\n/**\n * Return the LIBAVFILTER_VERSION_INT constant.\n */\nunsigned avfilter_version(void);\n\n/**\n * Return the libavfilter build-time configuration.\n */\nconst char *avfilter_configuration(void);\n\n/**\n * Return the libavfilter license.\n */\nconst char *avfilter_license(void);\n\ntypedef struct AVFilterContext AVFilterContext;\ntypedef struct AVFilterLink    AVFilterLink;\ntypedef struct AVFilterPad     AVFilterPad;\ntypedef struct AVFilterFormats AVFilterFormats;\n\n/**\n * Get the number of elements in a NULL-terminated array of 
AVFilterPads (e.g.\n * AVFilter.inputs/outputs).\n */\nint avfilter_pad_count(const AVFilterPad *pads);\n\n/**\n * Get the name of an AVFilterPad.\n *\n * @param pads an array of AVFilterPads\n * @param pad_idx index of the pad in the array; it is the caller's\n *                responsibility to ensure the index is valid\n *\n * @return name of the pad_idx'th pad in pads\n */\nconst char *avfilter_pad_get_name(const AVFilterPad *pads, int pad_idx);\n\n/**\n * Get the type of an AVFilterPad.\n *\n * @param pads an array of AVFilterPads\n * @param pad_idx index of the pad in the array; it is the caller's\n *                responsibility to ensure the index is valid\n *\n * @return type of the pad_idx'th pad in pads\n */\nenum AVMediaType avfilter_pad_get_type(const AVFilterPad *pads, int pad_idx);\n\n/**\n * The number of the filter inputs is not determined just by AVFilter.inputs.\n * The filter might add additional inputs during initialization depending on the\n * options supplied to it.\n */\n#define AVFILTER_FLAG_DYNAMIC_INPUTS        (1 << 0)\n/**\n * The number of the filter outputs is not determined just by AVFilter.outputs.\n * The filter might add additional outputs during initialization depending on\n * the options supplied to it.\n */\n#define AVFILTER_FLAG_DYNAMIC_OUTPUTS       (1 << 1)\n/**\n * The filter supports multithreading by splitting frames into multiple parts\n * and processing them concurrently.\n */\n#define AVFILTER_FLAG_SLICE_THREADS         (1 << 2)\n/**\n * Some filters support a generic \"enable\" expression option that can be used\n * to enable or disable a filter in the timeline. Filters supporting this\n * option have this flag set. 
When the enable expression is false, the default\n * no-op filter_frame() function is called in place of the filter_frame()\n * callback defined on each input pad, thus the frame is passed unchanged to\n * the next filters.\n */\n#define AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC  (1 << 16)\n/**\n * Same as AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC, except that the filter will\n * have its filter_frame() callback(s) called as usual even when the enable\n * expression is false. The filter will disable filtering within the\n * filter_frame() callback(s) itself, for example executing code depending on\n * the AVFilterContext->is_disabled value.\n */\n#define AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL (1 << 17)\n/**\n * Handy mask to test whether the filter supports the timeline feature or not\n * (internally or generically).\n */\n#define AVFILTER_FLAG_SUPPORT_TIMELINE (AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL)\n\n/**\n * Filter definition. This defines the pads a filter contains, and all the\n * callback functions used to interact with the filter.\n */\ntypedef struct AVFilter {\n    /**\n     * Filter name. Must be non-NULL and unique among filters.\n     */\n    const char *name;\n\n    /**\n     * A description of the filter. May be NULL.\n     *\n     * You should use the NULL_IF_CONFIG_SMALL() macro to define it.\n     */\n    const char *description;\n\n    /**\n     * List of inputs, terminated by a zeroed element.\n     *\n     * NULL if there are no (static) inputs. Instances of filters with\n     * AVFILTER_FLAG_DYNAMIC_INPUTS set may have more inputs than present in\n     * this list.\n     */\n    const AVFilterPad *inputs;\n    /**\n     * List of outputs, terminated by a zeroed element.\n     *\n     * NULL if there are no (static) outputs. 
Instances of filters with\n     * AVFILTER_FLAG_DYNAMIC_OUTPUTS set may have more outputs than present in\n     * this list.\n     */\n    const AVFilterPad *outputs;\n\n    /**\n     * A class for the private data, used to declare filter private AVOptions.\n     * This field is NULL for filters that do not declare any options.\n     *\n     * If this field is non-NULL, the first member of the filter private data\n     * must be a pointer to AVClass, which will be set by libavfilter generic\n     * code to this class.\n     */\n    const AVClass *priv_class;\n\n    /**\n     * A combination of AVFILTER_FLAG_*\n     */\n    int flags;\n\n    /*****************************************************************\n     * All fields below this line are not part of the public API. They\n     * may not be used outside of libavfilter and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n\n    /**\n     * Filter pre-initialization function\n     *\n     * This callback will be called immediately after the filter context is\n     * allocated, to allow allocating and initing sub-objects.\n     *\n     * If this callback is not NULL, the uninit callback will be called on\n     * allocation failure.\n     *\n     * @return 0 on success,\n     *         AVERROR code on failure (but the code will be\n     *           dropped and treated as ENOMEM by the calling code)\n     */\n    int (*preinit)(AVFilterContext *ctx);\n\n    /**\n     * Filter initialization function.\n     *\n     * This callback will be called only once during the filter lifetime, after\n     * all the options have been set, but before links between filters are\n     * established and format negotiation is done.\n     *\n     * Basic filter initialization should be done here. 
Filters with dynamic\n     * inputs and/or outputs should create those inputs/outputs here based on\n     * provided options. No more changes to this filter's inputs/outputs can be\n     * done after this callback.\n     *\n     * This callback must not assume that the filter links exist or frame\n     * parameters are known.\n     *\n     * @ref AVFilter.uninit \"uninit\" is guaranteed to be called even if\n     * initialization fails, so this callback does not have to clean up on\n     * failure.\n     *\n     * @return 0 on success, a negative AVERROR on failure\n     */\n    int (*init)(AVFilterContext *ctx);\n\n    /**\n     * Should be set instead of @ref AVFilter.init \"init\" by the filters that\n     * want to pass a dictionary of AVOptions to nested contexts that are\n     * allocated during init.\n     *\n     * On return, the options dict should be freed and replaced with one that\n     * contains all the options which could not be processed by this filter (or\n     * with NULL if all the options were processed).\n     *\n     * Otherwise the semantics is the same as for @ref AVFilter.init \"init\".\n     */\n    int (*init_dict)(AVFilterContext *ctx, AVDictionary **options);\n\n    /**\n     * Filter uninitialization function.\n     *\n     * Called only once right before the filter is freed. Should deallocate any\n     * memory held by the filter, release any buffer references, etc. It does\n     * not need to deallocate the AVFilterContext.priv memory itself.\n     *\n     * This callback may be called even if @ref AVFilter.init \"init\" was not\n     * called or failed, so it must be prepared to handle such a situation.\n     */\n    void (*uninit)(AVFilterContext *ctx);\n\n    /**\n     * Query formats supported by the filter on its inputs and outputs.\n     *\n     * This callback is called after the filter is initialized (so the inputs\n     * and outputs are fixed), shortly before the format negotiation. 
This\n     * callback may be called more than once.\n     *\n     * This callback must set AVFilterLink.out_formats on every input link and\n     * AVFilterLink.in_formats on every output link to a list of pixel/sample\n     * formats that the filter supports on that link. For audio links, this\n     * filter must also set @ref AVFilterLink.in_samplerates \"in_samplerates\" /\n     * @ref AVFilterLink.out_samplerates \"out_samplerates\" and\n     * @ref AVFilterLink.in_channel_layouts \"in_channel_layouts\" /\n     * @ref AVFilterLink.out_channel_layouts \"out_channel_layouts\" analogously.\n     *\n     * This callback may be NULL for filters with one input, in which case\n     * libavfilter assumes that it supports all input formats and preserves\n     * them on output.\n     *\n     * @return zero on success, a negative value corresponding to an\n     * AVERROR code otherwise\n     */\n    int (*query_formats)(AVFilterContext *);\n\n    int priv_size;      ///< size of private data to allocate for the filter\n\n    int flags_internal; ///< Additional flags for avfilter internal use only.\n\n    /**\n     * Used by the filter registration system. Must not be touched by any other\n     * code.\n     */\n    struct AVFilter *next;\n\n    /**\n     * Make the filter instance process a command.\n     *\n     * @param cmd    the command to process, for handling simplicity all commands must be alphanumeric only\n     * @param arg    the argument for the command\n     * @param res    a buffer with size res_len where the filter(s) can return a response. 
This must not change when the command is not supported.\n     * @param flags  if AVFILTER_CMD_FLAG_FAST is set and the command would be\n     *               time consuming then a filter should treat it like an unsupported command\n     *\n     * @returns >=0 on success otherwise an error code.\n     *          AVERROR(ENOSYS) on unsupported commands\n     */\n    int (*process_command)(AVFilterContext *, const char *cmd, const char *arg, char *res, int res_len, int flags);\n\n    /**\n     * Filter initialization function, alternative to the init()\n     * callback. Args contains the user-supplied parameters, opaque is\n     * used for providing binary data.\n     */\n    int (*init_opaque)(AVFilterContext *ctx, void *opaque);\n\n    /**\n     * Filter activation function.\n     *\n     * Called when any processing is needed from the filter, instead of any\n     * filter_frame and request_frame on pads.\n     *\n     * The function must examine inlinks and outlinks and perform a single\n     * step of processing. If there is nothing to do, the function must do\n     * nothing and not return an error. 
If more steps are or may be\n     * possible, it must use ff_filter_set_ready() to schedule another\n     * activation.\n     */\n    int (*activate)(AVFilterContext *ctx);\n} AVFilter;\n\n/**\n * Process multiple parts of the frame concurrently.\n */\n#define AVFILTER_THREAD_SLICE (1 << 0)\n\ntypedef struct AVFilterInternal AVFilterInternal;\n\n/** An instance of a filter */\nstruct AVFilterContext {\n    const AVClass *av_class;        ///< needed for av_log() and filters common options\n\n    const AVFilter *filter;         ///< the AVFilter of which this is an instance\n\n    char *name;                     ///< name of this filter instance\n\n    AVFilterPad   *input_pads;      ///< array of input pads\n    AVFilterLink **inputs;          ///< array of pointers to input links\n    unsigned    nb_inputs;          ///< number of input pads\n\n    AVFilterPad   *output_pads;     ///< array of output pads\n    AVFilterLink **outputs;         ///< array of pointers to output links\n    unsigned    nb_outputs;         ///< number of output pads\n\n    void *priv;                     ///< private data for use by the filter\n\n    struct AVFilterGraph *graph;    ///< filtergraph this filter belongs to\n\n    /**\n     * Type of multithreading being allowed/used. A combination of\n     * AVFILTER_THREAD_* flags.\n     *\n     * May be set by the caller before initializing the filter to forbid some\n     * or all kinds of multithreading for this filter. The default is allowing\n     * everything.\n     *\n     * When the filter is initialized, this field is combined using bit AND with\n     * AVFilterGraph.thread_type to get the final mask used for determining\n     * allowed threading types. I.e. 
a threading type needs to be set in both\n     * to be allowed.\n     *\n     * After the filter is initialized, libavfilter sets this field to the\n     * threading type that is actually used (0 for no multithreading).\n     */\n    int thread_type;\n\n    /**\n     * An opaque struct for libavfilter internal use.\n     */\n    AVFilterInternal *internal;\n\n    struct AVFilterCommand *command_queue;\n\n    char *enable_str;               ///< enable expression string\n    void *enable;                   ///< parsed expression (AVExpr*)\n    double *var_values;             ///< variable values for the enable expression\n    int is_disabled;                ///< the enabled state from the last expression evaluation\n\n    /**\n     * For filters which will create hardware frames, sets the device the\n     * filter should create them in.  All other filters will ignore this field:\n     * in particular, a filter which consumes or processes hardware frames will\n     * instead use the hw_frames_ctx field in AVFilterLink to carry the\n     * hardware context information.\n     */\n    AVBufferRef *hw_device_ctx;\n\n    /**\n     * Max number of threads allowed in this filter instance.\n     * If <= 0, its value is ignored.\n     * Overrides global number of threads set per filter graph.\n     */\n    int nb_threads;\n\n    /**\n     * Ready status of the filter.\n     * A non-0 value means that the filter needs activating;\n     * a higher value suggests a more urgent activation.\n     */\n    unsigned ready;\n\n    /**\n     * Sets the number of extra hardware frames which the filter will\n     * allocate on its output links for use in following filters or by\n     * the caller.\n     *\n     * Some hardware filters require all frames that they will use for\n     * output to be defined in advance before filtering starts.  For such\n     * filters, any hardware frame pools used for output must therefore be\n     * of fixed size.  
The extra frames set here are on top of any number\n     * that the filter needs internally in order to operate normally.\n     *\n     * This field must be set before the graph containing this filter is\n     * configured.\n     */\n    int extra_hw_frames;\n};\n\n/**\n * A link between two filters. This contains pointers to the source and\n * destination filters between which this link exists, and the indexes of\n * the pads involved. In addition, this link also contains the parameters\n * which have been negotiated and agreed upon between the filters, such as\n * image dimensions, format, etc.\n *\n * Applications must not normally access the link structure directly.\n * Use the buffersrc and buffersink API instead.\n * In the future, access to the header may be reserved for filter\n * implementations.\n */\nstruct AVFilterLink {\n    AVFilterContext *src;       ///< source filter\n    AVFilterPad *srcpad;        ///< output pad on the source filter\n\n    AVFilterContext *dst;       ///< dest filter\n    AVFilterPad *dstpad;        ///< input pad on the dest filter\n\n    enum AVMediaType type;      ///< filter media type\n\n    /* These parameters apply only to video */\n    int w;                      ///< agreed upon image width\n    int h;                      ///< agreed upon image height\n    AVRational sample_aspect_ratio; ///< agreed upon sample aspect ratio\n    /* These parameters apply only to audio */\n    uint64_t channel_layout;    ///< channel layout of current buffer (see libavutil/channel_layout.h)\n    int sample_rate;            ///< samples per second\n\n    int format;                 ///< agreed upon media format\n\n    /**\n     * Define the time base used by the PTS of the frames/samples\n     * which will pass through this link.\n     * During the configuration stage, each filter is supposed to\n     * change only the output timebase, while the timebase of the\n     * input link is assumed to be an unchangeable property.\n     */\n    
AVRational time_base;\n\n    /*****************************************************************\n     * All fields below this line are not part of the public API. They\n     * may not be used outside of libavfilter and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n    /**\n     * Lists of formats and channel layouts supported by the input and output\n     * filters respectively. These lists are used for negotiating the format\n     * to actually be used, which will be loaded into the format and\n     * channel_layout members, above, when chosen.\n     *\n     */\n    AVFilterFormats *in_formats;\n    AVFilterFormats *out_formats;\n\n    /**\n     * Lists of channel layouts and sample rates used for automatic\n     * negotiation.\n     */\n    AVFilterFormats  *in_samplerates;\n    AVFilterFormats *out_samplerates;\n    struct AVFilterChannelLayouts  *in_channel_layouts;\n    struct AVFilterChannelLayouts *out_channel_layouts;\n\n    /**\n     * Audio only, the destination filter sets this to a non-zero value to\n     * request that buffers with the given number of samples should be sent to\n     * it. 
AVFilterPad.needs_fifo must also be set on the corresponding input\n     * pad.\n     * Last buffer before EOF will be padded with silence.\n     */\n    int request_samples;\n\n    /** stage of the initialization of the link properties (dimensions, etc) */\n    enum {\n        AVLINK_UNINIT = 0,      ///< not started\n        AVLINK_STARTINIT,       ///< started, but incomplete\n        AVLINK_INIT             ///< complete\n    } init_state;\n\n    /**\n     * Graph the filter belongs to.\n     */\n    struct AVFilterGraph *graph;\n\n    /**\n     * Current timestamp of the link, as defined by the most recent\n     * frame(s), in link time_base units.\n     */\n    int64_t current_pts;\n\n    /**\n     * Current timestamp of the link, as defined by the most recent\n     * frame(s), in AV_TIME_BASE units.\n     */\n    int64_t current_pts_us;\n\n    /**\n     * Index in the age array.\n     */\n    int age_index;\n\n    /**\n     * Frame rate of the stream on the link, or 1/0 if unknown or variable;\n     * if left to 0/0, will be automatically copied from the first input\n     * of the source filter if it exists.\n     *\n     * Sources should set it to the best estimation of the real frame rate.\n     * If the source frame rate is unknown or variable, set this to 1/0.\n     * Filters should update it if necessary depending on their function.\n     * Sinks can use it to set a default output frame rate.\n     * It is similar to the r_frame_rate field in AVStream.\n     */\n    AVRational frame_rate;\n\n    /**\n     * Buffer partially filled with samples to achieve a fixed/minimum size.\n     */\n    AVFrame *partial_buf;\n\n    /**\n     * Size of the partial buffer to allocate.\n     * Must be between min_samples and max_samples.\n     */\n    int partial_buf_size;\n\n    /**\n     * Minimum number of samples to filter at once. 
If filter_frame() is\n     * called with fewer samples, it will accumulate them in partial_buf.\n     * This field and the related ones must not be changed after filtering\n     * has started.\n     * If 0, all related fields are ignored.\n     */\n    int min_samples;\n\n    /**\n     * Maximum number of samples to filter at once. If filter_frame() is\n     * called with more samples, it will split them.\n     */\n    int max_samples;\n\n    /**\n     * Number of channels.\n     */\n    int channels;\n\n    /**\n     * Link processing flags.\n     */\n    unsigned flags;\n\n    /**\n     * Number of past frames sent through the link.\n     */\n    int64_t frame_count_in, frame_count_out;\n\n    /**\n     * A pointer to a FFFramePool struct.\n     */\n    void *frame_pool;\n\n    /**\n     * True if a frame is currently wanted on the output of this filter.\n     * Set when ff_request_frame() is called by the output,\n     * cleared when a frame is filtered.\n     */\n    int frame_wanted_out;\n\n    /**\n     * For hwaccel pixel formats, this should be a reference to the\n     * AVHWFramesContext describing the frames.\n     */\n    AVBufferRef *hw_frames_ctx;\n\n#ifndef FF_INTERNAL_FIELDS\n\n    /**\n     * Internal structure members.\n     * The fields below this limit are internal for libavfilter's use\n     * and must in no way be accessed by applications.\n     */\n    char reserved[0xF000];\n\n#else /* FF_INTERNAL_FIELDS */\n\n    /**\n     * Queue of frames waiting to be filtered.\n     */\n    FFFrameQueue fifo;\n\n    /**\n     * If set, the source filter can not generate a frame as is.\n     * The goal is to avoid repeatedly calling the request_frame() method on\n     * the same link.\n     */\n    int frame_blocked_in;\n\n    /**\n     * Link input status.\n     * If not zero, all attempts of filter_frame will fail with the\n     * corresponding code.\n     */\n    int status_in;\n\n    /**\n     * Timestamp of the input status change.\n     */\n    
int64_t status_in_pts;\n\n    /**\n     * Link output status.\n     * If not zero, all attempts of request_frame will fail with the\n     * corresponding code.\n     */\n    int status_out;\n\n#endif /* FF_INTERNAL_FIELDS */\n\n};\n\n/**\n * Link two filters together.\n *\n * @param src    the source filter\n * @param srcpad index of the output pad on the source filter\n * @param dst    the destination filter\n * @param dstpad index of the input pad on the destination filter\n * @return       zero on success\n */\nint avfilter_link(AVFilterContext *src, unsigned srcpad,\n                  AVFilterContext *dst, unsigned dstpad);\n\n/**\n * Free the link in *link, and set its pointer to NULL.\n */\nvoid avfilter_link_free(AVFilterLink **link);\n\n#if FF_API_FILTER_GET_SET\n/**\n * Get the number of channels of a link.\n * @deprecated Use av_buffersink_get_channels()\n */\nattribute_deprecated\nint avfilter_link_get_channels(AVFilterLink *link);\n#endif\n\n/**\n * Set the closed field of a link.\n * @deprecated applications are not supposed to mess with links, they should\n * close the sinks.\n */\nattribute_deprecated\nvoid avfilter_link_set_closed(AVFilterLink *link, int closed);\n\n/**\n * Negotiate the media format, dimensions, etc of all inputs to a filter.\n *\n * @param filter the filter to negotiate the properties for its inputs\n * @return       zero on successful negotiation\n */\nint avfilter_config_links(AVFilterContext *filter);\n\n#define AVFILTER_CMD_FLAG_ONE   1 ///< Stop once a filter understood the command (for target=all for example), fast filters are favored automatically\n#define AVFILTER_CMD_FLAG_FAST  2 ///< Only execute command when its fast (like a video out that supports contrast adjustment in hw)\n\n/**\n * Make the filter instance process a command.\n * It is recommended to use avfilter_graph_send_command().\n */\nint avfilter_process_command(AVFilterContext *filter, const char *cmd, const char *arg, char *res, int res_len, int 
flags);\n\n/**\n * Iterate over all registered filters.\n *\n * @param opaque a pointer where libavfilter will store the iteration state. Must\n *               point to NULL to start the iteration.\n *\n * @return the next registered filter or NULL when the iteration is\n *         finished\n */\nconst AVFilter *av_filter_iterate(void **opaque);\n\n#if FF_API_NEXT\n/** Initialize the filter system. Register all builtin filters. */\nattribute_deprecated\nvoid avfilter_register_all(void);\n\n/**\n * Register a filter. This is only needed if you plan to use\n * avfilter_get_by_name later to look up the AVFilter structure by name. A\n * filter can still be instantiated with avfilter_graph_alloc_filter even if it\n * is not registered.\n *\n * @param filter the filter to register\n * @return 0 if the registration was successful, a negative value\n * otherwise\n */\nattribute_deprecated\nint avfilter_register(AVFilter *filter);\n\n/**\n * Iterate over all registered filters.\n * @return If prev is non-NULL, next registered filter after prev or NULL if\n * prev is the last filter. If prev is NULL, return the first registered filter.\n */\nattribute_deprecated\nconst AVFilter *avfilter_next(const AVFilter *prev);\n#endif\n\n/**\n * Get a filter definition matching the given name.\n *\n * @param name the filter name to find\n * @return     the filter definition, if any matching one is registered.\n *             NULL if none found.\n */\nconst AVFilter *avfilter_get_by_name(const char *name);\n\n\n/**\n * Initialize a filter with the supplied parameters.\n *\n * @param ctx  uninitialized filter context to initialize\n * @param args Options to initialize the filter with. 
This must be a\n *             ':'-separated list of options in the 'key=value' form.\n *             May be NULL if the options have been set directly using the\n *             AVOptions API or there are no options that need to be set.\n * @return 0 on success, a negative AVERROR on failure\n */\nint avfilter_init_str(AVFilterContext *ctx, const char *args);\n\n/**\n * Initialize a filter with the supplied dictionary of options.\n *\n * @param ctx     uninitialized filter context to initialize\n * @param options An AVDictionary filled with options for this filter. On\n *                return this parameter will be destroyed and replaced with\n *                a dict containing options that were not found. This dictionary\n *                must be freed by the caller.\n *                May be NULL, then this function is equivalent to\n *                avfilter_init_str() with the second parameter set to NULL.\n * @return 0 on success, a negative AVERROR on failure\n *\n * @note This function and avfilter_init_str() do essentially the same thing,\n * the difference is in the manner in which the options are passed. It is up to the\n * calling code to choose whichever is preferable. The two functions also\n * behave differently when some of the provided options are not declared as\n * supported by the filter. In such a case, avfilter_init_str() will fail, but\n * this function will leave those extra options in the options AVDictionary and\n * continue as usual.\n */\nint avfilter_init_dict(AVFilterContext *ctx, AVDictionary **options);\n\n/**\n * Free a filter context. 
This will also remove the filter from its\n * filtergraph's list of filters.\n *\n * @param filter the filter to free\n */\nvoid avfilter_free(AVFilterContext *filter);\n\n/**\n * Insert a filter in the middle of an existing link.\n *\n * @param link the link into which the filter should be inserted\n * @param filt the filter to be inserted\n * @param filt_srcpad_idx the input pad on the filter to connect\n * @param filt_dstpad_idx the output pad on the filter to connect\n * @return     zero on success\n */\nint avfilter_insert_filter(AVFilterLink *link, AVFilterContext *filt,\n                           unsigned filt_srcpad_idx, unsigned filt_dstpad_idx);\n\n/**\n * @return AVClass for AVFilterContext.\n *\n * @see av_opt_find().\n */\nconst AVClass *avfilter_get_class(void);\n\ntypedef struct AVFilterGraphInternal AVFilterGraphInternal;\n\n/**\n * A function pointer passed to the @ref AVFilterGraph.execute callback to be\n * executed multiple times, possibly in parallel.\n *\n * @param ctx the filter context the job belongs to\n * @param arg an opaque parameter passed through from @ref\n *            AVFilterGraph.execute\n * @param jobnr the index of the job being executed\n * @param nb_jobs the total number of jobs\n *\n * @return 0 on success, a negative AVERROR on error\n */\ntypedef int (avfilter_action_func)(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs);\n\n/**\n * A function executing multiple jobs, possibly in parallel.\n *\n * @param ctx the filter context to which the jobs belong\n * @param func the function to be called multiple times\n * @param arg the argument to be passed to func\n * @param ret a nb_jobs-sized array to be filled with return values from each\n *            invocation of func\n * @param nb_jobs the number of jobs to execute\n *\n * @return 0 on success, a negative AVERROR on error\n */\ntypedef int (avfilter_execute_func)(AVFilterContext *ctx, avfilter_action_func *func,\n                                    void *arg, int 
*ret, int nb_jobs);\n\ntypedef struct AVFilterGraph {\n    const AVClass *av_class;\n    AVFilterContext **filters;\n    unsigned nb_filters;\n\n    char *scale_sws_opts; ///< sws options to use for the auto-inserted scale filters\n#if FF_API_LAVR_OPTS\n    attribute_deprecated char *resample_lavr_opts;   ///< libavresample options to use for the auto-inserted resample filters\n#endif\n\n    /**\n     * Type of multithreading allowed for filters in this graph. A combination\n     * of AVFILTER_THREAD_* flags.\n     *\n     * May be set by the caller at any point, the setting will apply to all\n     * filters initialized after that. The default is allowing everything.\n     *\n     * When a filter in this graph is initialized, this field is combined using\n     * bit AND with AVFilterContext.thread_type to get the final mask used for\n     * determining allowed threading types. I.e. a threading type needs to be\n     * set in both to be allowed.\n     */\n    int thread_type;\n\n    /**\n     * Maximum number of threads used by filters in this graph. May be set by\n     * the caller before adding any filters to the filtergraph. Zero (the\n     * default) means that the number of threads is determined automatically.\n     */\n    int nb_threads;\n\n    /**\n     * Opaque object for libavfilter internal use.\n     */\n    AVFilterGraphInternal *internal;\n\n    /**\n     * Opaque user data. May be set by the caller to an arbitrary value, e.g. 
to\n     * be used from callbacks like @ref AVFilterGraph.execute.\n     * Libavfilter will not touch this field in any way.\n     */\n    void *opaque;\n\n    /**\n     * This callback may be set by the caller immediately after allocating the\n     * graph and before adding any filters to it, to provide a custom\n     * multithreading implementation.\n     *\n     * If set, filters with slice threading capability will call this callback\n     * to execute multiple jobs in parallel.\n     *\n     * If this field is left unset, libavfilter will use its internal\n     * implementation, which may or may not be multithreaded depending on the\n     * platform and build options.\n     */\n    avfilter_execute_func *execute;\n\n    char *aresample_swr_opts; ///< swr options to use for the auto-inserted aresample filters, Access ONLY through AVOptions\n\n    /**\n     * Private fields\n     *\n     * The following fields are for internal use only.\n     * Their type, offset, number and semantic can change without notice.\n     */\n\n    AVFilterLink **sink_links;\n    int sink_links_count;\n\n    unsigned disable_auto_convert;\n} AVFilterGraph;\n\n/**\n * Allocate a filter graph.\n *\n * @return the allocated filter graph on success or NULL.\n */\nAVFilterGraph *avfilter_graph_alloc(void);\n\n/**\n * Create a new filter instance in a filter graph.\n *\n * @param graph graph in which the new filter will be used\n * @param filter the filter to create an instance of\n * @param name Name to give to the new instance (will be copied to\n *             AVFilterContext.name). This may be used by the caller to identify\n *             different filters, libavfilter itself assigns no semantics to\n *             this parameter. 
May be NULL.\n *\n * @return the context of the newly created filter instance (note that it is\n *         also retrievable directly through AVFilterGraph.filters or with\n *         avfilter_graph_get_filter()) on success or NULL on failure.\n */\nAVFilterContext *avfilter_graph_alloc_filter(AVFilterGraph *graph,\n                                             const AVFilter *filter,\n                                             const char *name);\n\n/**\n * Get a filter instance identified by instance name from graph.\n *\n * @param graph filter graph to search through.\n * @param name filter instance name (should be unique in the graph).\n * @return the pointer to the found filter instance or NULL if it\n * cannot be found.\n */\nAVFilterContext *avfilter_graph_get_filter(AVFilterGraph *graph, const char *name);\n\n/**\n * Create and add a filter instance into an existing graph.\n * The filter instance is created from the filter filt and inited\n * with the parameters args and opaque.\n *\n * In case of success put in *filt_ctx the pointer to the created\n * filter instance, otherwise set *filt_ctx to NULL.\n *\n * @param name the instance name to give to the created filter instance\n * @param graph_ctx the filter graph\n * @return a negative AVERROR error code in case of failure, a non\n * negative value otherwise\n */\nint avfilter_graph_create_filter(AVFilterContext **filt_ctx, const AVFilter *filt,\n                                 const char *name, const char *args, void *opaque,\n                                 AVFilterGraph *graph_ctx);\n\n/**\n * Enable or disable automatic format conversion inside the graph.\n *\n * Note that format conversion can still happen inside explicitly inserted\n * scale and aresample filters.\n *\n * @param flags  any of the AVFILTER_AUTO_CONVERT_* constants\n */\nvoid avfilter_graph_set_auto_convert(AVFilterGraph *graph, unsigned flags);\n\nenum {\n    AVFILTER_AUTO_CONVERT_ALL  =  0, /**< all automatic conversions enabled 
*/\n    AVFILTER_AUTO_CONVERT_NONE = -1, /**< all automatic conversions disabled */\n};\n\n/**\n * Check validity and configure all the links and formats in the graph.\n *\n * @param graphctx the filter graph\n * @param log_ctx context used for logging\n * @return >= 0 in case of success, a negative AVERROR code otherwise\n */\nint avfilter_graph_config(AVFilterGraph *graphctx, void *log_ctx);\n\n/**\n * Free a graph, destroy its links, and set *graph to NULL.\n * If *graph is NULL, do nothing.\n */\nvoid avfilter_graph_free(AVFilterGraph **graph);\n\n/**\n * A linked-list of the inputs/outputs of the filter chain.\n *\n * This is mainly useful for avfilter_graph_parse() / avfilter_graph_parse2(),\n * where it is used to communicate open (unlinked) inputs and outputs from and\n * to the caller.\n * This struct specifies, for each unconnected pad contained in the graph, the\n * filter context and the pad index required for establishing a link.\n */\ntypedef struct AVFilterInOut {\n    /** unique name for this input/output in the list */\n    char *name;\n\n    /** filter context associated with this input/output */\n    AVFilterContext *filter_ctx;\n\n    /** index of the filt_ctx pad to use for linking */\n    int pad_idx;\n\n    /** next input/output in the list, NULL if this is the last */\n    struct AVFilterInOut *next;\n} AVFilterInOut;\n\n/**\n * Allocate a single AVFilterInOut entry.\n * Must be freed with avfilter_inout_free().\n * @return allocated AVFilterInOut on success, NULL on failure.\n */\nAVFilterInOut *avfilter_inout_alloc(void);\n\n/**\n * Free the supplied list of AVFilterInOut and set *inout to NULL.\n * If *inout is NULL, do nothing.\n */\nvoid avfilter_inout_free(AVFilterInOut **inout);\n\n/**\n * Add a graph described by a string to a graph.\n *\n * @note The caller must provide the lists of inputs and outputs,\n * which therefore must be known before calling the function.\n *\n * @note The inputs parameter describes inputs of the already 
existing\n * part of the graph; i.e. from the point of view of the newly created\n * part, they are outputs. Similarly the outputs parameter describes\n * outputs of the already existing filters, which are provided as\n * inputs to the parsed filters.\n *\n * @param graph   the filter graph where to link the parsed graph context\n * @param filters string to be parsed\n * @param inputs  linked list to the inputs of the graph\n * @param outputs linked list to the outputs of the graph\n * @return zero on success, a negative AVERROR code on error\n */\nint avfilter_graph_parse(AVFilterGraph *graph, const char *filters,\n                         AVFilterInOut *inputs, AVFilterInOut *outputs,\n                         void *log_ctx);\n\n/**\n * Add a graph described by a string to a graph.\n *\n * In the graph filters description, if the input label of the first\n * filter is not specified, \"in\" is assumed; if the output label of\n * the last filter is not specified, \"out\" is assumed.\n *\n * @param graph   the filter graph where to link the parsed graph context\n * @param filters string to be parsed\n * @param inputs  pointer to a linked list to the inputs of the graph, may be NULL.\n *                If non-NULL, *inputs is updated to contain the list of open inputs\n *                after the parsing, should be freed with avfilter_inout_free().\n * @param outputs pointer to a linked list to the outputs of the graph, may be NULL.\n *                If non-NULL, *outputs is updated to contain the list of open outputs\n *                after the parsing, should be freed with avfilter_inout_free().\n * @return non negative on success, a negative AVERROR code on error\n */\nint avfilter_graph_parse_ptr(AVFilterGraph *graph, const char *filters,\n                             AVFilterInOut **inputs, AVFilterInOut **outputs,\n                             void *log_ctx);\n\n/**\n * Add a graph described by a string to a graph.\n *\n * @param[in]  graph   the filter graph 
where to link the parsed graph context\n * @param[in]  filters string to be parsed\n * @param[out] inputs  a linked list of all free (unlinked) inputs of the\n *                     parsed graph will be returned here. It is to be freed\n *                     by the caller using avfilter_inout_free().\n * @param[out] outputs a linked list of all free (unlinked) outputs of the\n *                     parsed graph will be returned here. It is to be freed by the\n *                     caller using avfilter_inout_free().\n * @return zero on success, a negative AVERROR code on error\n *\n * @note This function returns the inputs and outputs that are left\n * unlinked after parsing the graph and the caller then deals with\n * them.\n * @note This function makes no reference whatsoever to already\n * existing parts of the graph and the inputs parameter will on return\n * contain inputs of the newly parsed part of the graph.  Analogously\n * the outputs parameter will contain outputs of the newly created\n * filters.\n */\nint avfilter_graph_parse2(AVFilterGraph *graph, const char *filters,\n                          AVFilterInOut **inputs,\n                          AVFilterInOut **outputs);\n\n/**\n * Send a command to one or more filter instances.\n *\n * @param graph  the filter graph\n * @param target the filter(s) to which the command should be sent\n *               \"all\" sends to all filters\n *               otherwise it can be a filter or filter instance name\n *               which will send the command to all matching filters.\n * @param cmd    the command to send, for handling simplicity all commands must be alphanumeric only\n * @param arg    the argument for the command\n * @param res    a buffer with size res_size where the filter(s) can return a response.\n *\n * @returns >=0 on success otherwise an error code.\n *              AVERROR(ENOSYS) on unsupported commands\n */\nint avfilter_graph_send_command(AVFilterGraph *graph, const char *target, const 
char *cmd, const char *arg, char *res, int res_len, int flags);\n\n/**\n * Queue a command for one or more filter instances.\n *\n * @param graph  the filter graph\n * @param target the filter(s) to which the command should be sent\n *               \"all\" sends to all filters\n *               otherwise it can be a filter or filter instance name\n *               which will send the command to all matching filters.\n * @param cmd    the command to send, for handling simplicity all commands must be alphanumeric only\n * @param arg    the argument for the command\n * @param ts     time at which the command should be sent to the filter\n *\n * @note As this executes commands after this function returns, no return code\n *       from the filter is provided, also AVFILTER_CMD_FLAG_ONE is not supported.\n */\nint avfilter_graph_queue_command(AVFilterGraph *graph, const char *target, const char *cmd, const char *arg, int flags, double ts);\n\n\n/**\n * Dump a graph into a human-readable string representation.\n *\n * @param graph    the graph to dump\n * @param options  formatting options; currently ignored\n * @return  a string, or NULL in case of memory allocation failure;\n *          the string must be freed using av_free\n */\nchar *avfilter_graph_dump(AVFilterGraph *graph, const char *options);\n\n/**\n * Request a frame on the oldest sink link.\n *\n * If the request returns AVERROR_EOF, try the next.\n *\n * Note that this function is not meant to be the sole scheduling mechanism\n * of a filtergraph, only a convenience function to help drain a filtergraph\n * in a balanced way under normal circumstances.\n *\n * Also note that AVERROR_EOF does not mean that frames did not arrive on\n * some of the sinks during the process.\n * When there are multiple sink links, in case the requested link\n * returns an EOF, this may cause a filter to flush pending frames\n * which are sent to another sink link, although unrequested.\n *\n * @return  the return value of 
ff_request_frame(),\n *          or AVERROR_EOF if all links returned AVERROR_EOF\n */\nint avfilter_graph_request_oldest(AVFilterGraph *graph);\n\n/**\n * @}\n */\n\n#endif /* AVFILTER_AVFILTER_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavfilter/buffersink.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVFILTER_BUFFERSINK_H\n#define AVFILTER_BUFFERSINK_H\n\n/**\n * @file\n * @ingroup lavfi_buffersink\n * memory buffer sink API for audio and video\n */\n\n#include \"avfilter.h\"\n\n/**\n * @defgroup lavfi_buffersink Buffer sink API\n * @ingroup lavfi\n * @{\n */\n\n/**\n * Get a frame with filtered data from sink and put it in frame.\n *\n * @param ctx    pointer to a buffersink or abuffersink filter context.\n * @param frame  pointer to an allocated frame that will be filled with data.\n *               The data must be freed using av_frame_unref() / av_frame_free()\n * @param flags  a combination of AV_BUFFERSINK_FLAG_* flags\n *\n * @return  >= 0 in for success, a negative AVERROR code for failure.\n */\nint av_buffersink_get_frame_flags(AVFilterContext *ctx, AVFrame *frame, int flags);\n\n/**\n * Tell av_buffersink_get_buffer_ref() to read video/samples buffer\n * reference, but not remove it from the buffer. 
This is useful if you\n * only need to read a video/samples buffer, without fetching it.\n */\n#define AV_BUFFERSINK_FLAG_PEEK 1\n\n/**\n * Tell av_buffersink_get_buffer_ref() not to request a frame from its input.\n * If a frame is already buffered, it is read (and removed from the buffer),\n * but if no frame is present, return AVERROR(EAGAIN).\n */\n#define AV_BUFFERSINK_FLAG_NO_REQUEST 2\n\n/**\n * Struct to use for initializing a buffersink context.\n */\ntypedef struct AVBufferSinkParams {\n    const enum AVPixelFormat *pixel_fmts; ///< list of allowed pixel formats, terminated by AV_PIX_FMT_NONE\n} AVBufferSinkParams;\n\n/**\n * Create an AVBufferSinkParams structure.\n *\n * Must be freed with av_free().\n */\nAVBufferSinkParams *av_buffersink_params_alloc(void);\n\n/**\n * Struct to use for initializing an abuffersink context.\n */\ntypedef struct AVABufferSinkParams {\n    const enum AVSampleFormat *sample_fmts; ///< list of allowed sample formats, terminated by AV_SAMPLE_FMT_NONE\n    const int64_t *channel_layouts;         ///< list of allowed channel layouts, terminated by -1\n    const int *channel_counts;              ///< list of allowed channel counts, terminated by -1\n    int all_channel_counts;                 ///< if not 0, accept any channel count or layout\n    int *sample_rates;                      ///< list of allowed sample rates, terminated by -1\n} AVABufferSinkParams;\n\n/**\n * Create an AVABufferSinkParams structure.\n *\n * Must be freed with av_free().\n */\nAVABufferSinkParams *av_abuffersink_params_alloc(void);\n\n/**\n * Set the frame size for an audio buffer sink.\n *\n * All calls to av_buffersink_get_buffer_ref will return a buffer with\n * exactly the specified number of samples, or AVERROR(EAGAIN) if there is\n * not enough. 
The last buffer at EOF will be padded with 0.\n */\nvoid av_buffersink_set_frame_size(AVFilterContext *ctx, unsigned frame_size);\n\n/**\n * @defgroup lavfi_buffersink_accessors Buffer sink accessors\n * Get the properties of the stream\n * @{\n */\n\nenum AVMediaType av_buffersink_get_type                (const AVFilterContext *ctx);\nAVRational       av_buffersink_get_time_base           (const AVFilterContext *ctx);\nint              av_buffersink_get_format              (const AVFilterContext *ctx);\n\nAVRational       av_buffersink_get_frame_rate          (const AVFilterContext *ctx);\nint              av_buffersink_get_w                   (const AVFilterContext *ctx);\nint              av_buffersink_get_h                   (const AVFilterContext *ctx);\nAVRational       av_buffersink_get_sample_aspect_ratio (const AVFilterContext *ctx);\n\nint              av_buffersink_get_channels            (const AVFilterContext *ctx);\nuint64_t         av_buffersink_get_channel_layout      (const AVFilterContext *ctx);\nint              av_buffersink_get_sample_rate         (const AVFilterContext *ctx);\n\nAVBufferRef *    av_buffersink_get_hw_frames_ctx       (const AVFilterContext *ctx);\n\n/** @} */\n\n/**\n * Get a frame with filtered data from sink and put it in frame.\n *\n * @param ctx pointer to a context of a buffersink or abuffersink AVFilter.\n * @param frame pointer to an allocated frame that will be filled with data.\n *              The data must be freed using av_frame_unref() / av_frame_free()\n *\n * @return\n *         - >= 0 if a frame was successfully returned.\n *         - AVERROR(EAGAIN) if no frames are available at this point; more\n *           input frames must be added to the filtergraph to get more output.\n *         - AVERROR_EOF if there will be no more output frames on this sink.\n *         - A different negative AVERROR code in other failure cases.\n */\nint av_buffersink_get_frame(AVFilterContext *ctx, AVFrame *frame);\n\n/**\n * Same 
as av_buffersink_get_frame(), but with the ability to specify the number\n * of samples read. This function is less efficient than\n * av_buffersink_get_frame(), because it copies the data around.\n *\n * @param ctx pointer to a context of the abuffersink AVFilter.\n * @param frame pointer to an allocated frame that will be filled with data.\n *              The data must be freed using av_frame_unref() / av_frame_free()\n *              frame will contain exactly nb_samples audio samples, except at\n *              the end of stream, when it can contain fewer than nb_samples.\n *\n * @return The return codes have the same meaning as for\n *         av_buffersink_get_frame().\n *\n * @warning do not mix this function with av_buffersink_get_frame(). Use only one or\n * the other with a single sink, not both.\n */\nint av_buffersink_get_samples(AVFilterContext *ctx, AVFrame *frame, int nb_samples);\n\n/**\n * @}\n */\n\n#endif /* AVFILTER_BUFFERSINK_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavfilter/buffersrc.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVFILTER_BUFFERSRC_H\n#define AVFILTER_BUFFERSRC_H\n\n/**\n * @file\n * @ingroup lavfi_buffersrc\n * Memory buffer source API.\n */\n\n#include \"avfilter.h\"\n\n/**\n * @defgroup lavfi_buffersrc Buffer source API\n * @ingroup lavfi\n * @{\n */\n\nenum {\n\n    /**\n     * Do not check for format changes.\n     */\n    AV_BUFFERSRC_FLAG_NO_CHECK_FORMAT = 1,\n\n    /**\n     * Immediately push the frame to the output.\n     */\n    AV_BUFFERSRC_FLAG_PUSH = 4,\n\n    /**\n     * Keep a reference to the frame.\n     * If the frame if reference-counted, create a new reference; otherwise\n     * copy the frame data.\n     */\n    AV_BUFFERSRC_FLAG_KEEP_REF = 8,\n\n};\n\n/**\n * Get the number of failed requests.\n *\n * A failed request is when the request_frame method is called while no\n * frame is present in the buffer.\n * The number is reset when a frame is added.\n */\nunsigned av_buffersrc_get_nb_failed_requests(AVFilterContext *buffer_src);\n\n/**\n * This structure contains the parameters describing the frames that will be\n * passed to this filter.\n *\n * It should be allocated with av_buffersrc_parameters_alloc() and freed with\n * av_free(). 
All the allocated fields in it remain owned by the caller.\n */\ntypedef struct AVBufferSrcParameters {\n    /**\n     * video: the pixel format, value corresponds to enum AVPixelFormat\n     * audio: the sample format, value corresponds to enum AVSampleFormat\n     */\n    int format;\n    /**\n     * The timebase to be used for the timestamps on the input frames.\n     */\n    AVRational time_base;\n\n    /**\n     * Video only, the display dimensions of the input frames.\n     */\n    int width, height;\n\n    /**\n     * Video only, the sample (pixel) aspect ratio.\n     */\n    AVRational sample_aspect_ratio;\n\n    /**\n     * Video only, the frame rate of the input video. This field must only be\n     * set to a non-zero value if input stream has a known constant framerate\n     * and should be left at its initial value if the framerate is variable or\n     * unknown.\n     */\n    AVRational frame_rate;\n\n    /**\n     * Video with a hwaccel pixel format only. This should be a reference to an\n     * AVHWFramesContext instance describing the input frames.\n     */\n    AVBufferRef *hw_frames_ctx;\n\n    /**\n     * Audio only, the audio sampling rate in samples per second.\n     */\n    int sample_rate;\n\n    /**\n     * Audio only, the audio channel layout\n     */\n    uint64_t channel_layout;\n} AVBufferSrcParameters;\n\n/**\n * Allocate a new AVBufferSrcParameters instance. It should be freed by the\n * caller with av_free().\n */\nAVBufferSrcParameters *av_buffersrc_parameters_alloc(void);\n\n/**\n * Initialize the buffersrc or abuffersrc filter with the provided parameters.\n * This function may be called multiple times, the later calls override the\n * previous ones. Some of the parameters may also be set through AVOptions, then\n * whatever method is used last takes precedence.\n *\n * @param ctx an instance of the buffersrc or abuffersrc filter\n * @param param the stream parameters. 
The frames later passed to this filter\n *              must conform to those parameters. All the allocated fields in\n *              param remain owned by the caller, libavfilter will make internal\n *              copies or references when necessary.\n * @return 0 on success, a negative AVERROR code on failure.\n */\nint av_buffersrc_parameters_set(AVFilterContext *ctx, AVBufferSrcParameters *param);\n\n/**\n * Add a frame to the buffer source.\n *\n * @param ctx   an instance of the buffersrc filter\n * @param frame frame to be added. If the frame is reference counted, this\n * function will make a new reference to it. Otherwise the frame data will be\n * copied.\n *\n * @return 0 on success, a negative AVERROR on error\n *\n * This function is equivalent to av_buffersrc_add_frame_flags() with the\n * AV_BUFFERSRC_FLAG_KEEP_REF flag.\n */\nav_warn_unused_result\nint av_buffersrc_write_frame(AVFilterContext *ctx, const AVFrame *frame);\n\n/**\n * Add a frame to the buffer source.\n *\n * @param ctx   an instance of the buffersrc filter\n * @param frame frame to be added. If the frame is reference counted, this\n * function will take ownership of the reference(s) and reset the frame.\n * Otherwise the frame data will be copied. 
If this function returns an error,\n * the input frame is not touched.\n *\n * @return 0 on success, a negative AVERROR on error.\n *\n * @note the difference between this function and av_buffersrc_write_frame() is\n * that av_buffersrc_write_frame() creates a new reference to the input frame,\n * while this function takes ownership of the reference passed to it.\n *\n * This function is equivalent to av_buffersrc_add_frame_flags() without the\n * AV_BUFFERSRC_FLAG_KEEP_REF flag.\n */\nav_warn_unused_result\nint av_buffersrc_add_frame(AVFilterContext *ctx, AVFrame *frame);\n\n/**\n * Add a frame to the buffer source.\n *\n * By default, if the frame is reference-counted, this function will take\n * ownership of the reference(s) and reset the frame. This can be controlled\n * using the flags.\n *\n * If this function returns an error, the input frame is not touched.\n *\n * @param buffer_src  pointer to a buffer source context\n * @param frame       a frame, or NULL to mark EOF\n * @param flags       a combination of AV_BUFFERSRC_FLAG_*\n * @return            >= 0 in case of success, a negative AVERROR code\n *                    in case of failure\n */\nav_warn_unused_result\nint av_buffersrc_add_frame_flags(AVFilterContext *buffer_src,\n                                 AVFrame *frame, int flags);\n\n/**\n * Close the buffer source after EOF.\n *\n * This is similar to passing NULL to av_buffersrc_add_frame_flags()\n * except it takes the timestamp of the EOF, i.e. the timestamp of the end\n * of the last frame.\n */\nint av_buffersrc_close(AVFilterContext *ctx, int64_t pts, unsigned flags);\n\n/**\n * @}\n */\n\n#endif /* AVFILTER_BUFFERSRC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavfilter/version.h",
    "content": "/*\n * Version macros.\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVFILTER_VERSION_H\n#define AVFILTER_VERSION_H\n\n/**\n * @file\n * @ingroup lavfi\n * Libavfilter version macros\n */\n\n#include \"libavutil/version.h\"\n\n#define LIBAVFILTER_VERSION_MAJOR   7\n#define LIBAVFILTER_VERSION_MINOR  16\n#define LIBAVFILTER_VERSION_MICRO 100\n\n#define LIBAVFILTER_VERSION_INT AV_VERSION_INT(LIBAVFILTER_VERSION_MAJOR, \\\n                                               LIBAVFILTER_VERSION_MINOR, \\\n                                               LIBAVFILTER_VERSION_MICRO)\n#define LIBAVFILTER_VERSION     AV_VERSION(LIBAVFILTER_VERSION_MAJOR,   \\\n                                           LIBAVFILTER_VERSION_MINOR,   \\\n                                           LIBAVFILTER_VERSION_MICRO)\n#define LIBAVFILTER_BUILD       LIBAVFILTER_VERSION_INT\n\n#define LIBAVFILTER_IDENT       \"Lavfi\" AV_STRINGIFY(LIBAVFILTER_VERSION)\n\n/**\n * FF_API_* defines may be placed below to indicate public API that will be\n * dropped at a future version bump. 
The defines themselves are not part of\n * the public API and may change, break or disappear at any time.\n */\n\n#ifndef FF_API_OLD_FILTER_OPTS_ERROR\n#define FF_API_OLD_FILTER_OPTS_ERROR        (LIBAVFILTER_VERSION_MAJOR < 8)\n#endif\n#ifndef FF_API_LAVR_OPTS\n#define FF_API_LAVR_OPTS                    (LIBAVFILTER_VERSION_MAJOR < 8)\n#endif\n#ifndef FF_API_FILTER_GET_SET\n#define FF_API_FILTER_GET_SET               (LIBAVFILTER_VERSION_MAJOR < 8)\n#endif\n#ifndef FF_API_NEXT\n#define FF_API_NEXT                         (LIBAVFILTER_VERSION_MAJOR < 8)\n#endif\n\n#endif /* AVFILTER_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavformat/avformat.h",
    "content": "/*\n * copyright (c) 2001 Fabrice Bellard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVFORMAT_AVFORMAT_H\n#define AVFORMAT_AVFORMAT_H\n\n/**\n * @file\n * @ingroup libavf\n * Main libavformat public API header\n */\n\n/**\n * @defgroup libavf libavformat\n * I/O and Muxing/Demuxing Library\n *\n * Libavformat (lavf) is a library for dealing with various media container\n * formats. Its main two purposes are demuxing - i.e. splitting a media file\n * into component streams, and the reverse process of muxing - writing supplied\n * data in a specified container format. It also has an @ref lavf_io\n * \"I/O module\" which supports a number of protocols for accessing the data (e.g.\n * file, tcp, http and others). Before using lavf, you need to call\n * av_register_all() to register all compiled muxers, demuxers and protocols.\n * Unless you are absolutely sure you won't use libavformat's network\n * capabilities, you should also call avformat_network_init().\n *\n * A supported input format is described by an AVInputFormat struct, conversely\n * an output format is described by AVOutputFormat. You can iterate over all\n * registered input/output formats using the av_iformat_next() /\n * av_oformat_next() functions. 
The protocols layer is not part of the public\n * API, so you can only get the names of supported protocols with the\n * avio_enum_protocols() function.\n *\n * Main lavf structure used for both muxing and demuxing is AVFormatContext,\n * which exports all information about the file being read or written. As with\n * most Libavformat structures, its size is not part of public ABI, so it cannot be\n * allocated on stack or directly with av_malloc(). To create an\n * AVFormatContext, use avformat_alloc_context() (some functions, like\n * avformat_open_input() might do that for you).\n *\n * Most importantly an AVFormatContext contains:\n * @li the @ref AVFormatContext.iformat \"input\" or @ref AVFormatContext.oformat\n * \"output\" format. It is either autodetected or set by user for input;\n * always set by user for output.\n * @li an @ref AVFormatContext.streams \"array\" of AVStreams, which describe all\n * elementary streams stored in the file. AVStreams are typically referred to\n * using their index in this array.\n * @li an @ref AVFormatContext.pb \"I/O context\". It is either opened by lavf or\n * set by user for input, always set by user for output (unless you are dealing\n * with an AVFMT_NOFILE format).\n *\n * @section lavf_options Passing options to (de)muxers\n * It is possible to configure lavf muxers and demuxers using the @ref avoptions\n * mechanism. Generic (format-independent) libavformat options are provided by\n * AVFormatContext, they can be examined from a user program by calling\n * av_opt_next() / av_opt_find() on an allocated AVFormatContext (or its AVClass\n * from avformat_get_class()). Private (format-specific) options are provided by\n * AVFormatContext.priv_data if and only if AVInputFormat.priv_class /\n * AVOutputFormat.priv_class of the corresponding format struct is non-NULL.\n * Further options may be provided by the @ref AVFormatContext.pb \"I/O context\",\n * if its AVClass is non-NULL, and the protocols layer. 
See the discussion on\n * nesting in @ref avoptions documentation to learn how to access those.\n *\n * @section urls\n * URL strings in libavformat are made of a scheme/protocol, a ':', and a\n * scheme specific string. URLs without a scheme and ':' used for local files\n * are supported but deprecated. \"file:\" should be used for local files.\n *\n * It is important that the scheme string is not taken from untrusted\n * sources without checks.\n *\n * Note that some schemes/protocols are quite powerful, allowing access to\n * both local and remote files, parts of them, concatenations of them, local\n * audio and video devices and so on.\n *\n * @{\n *\n * @defgroup lavf_decoding Demuxing\n * @{\n * Demuxers read a media file and split it into chunks of data (@em packets). A\n * @ref AVPacket \"packet\" contains one or more encoded frames which belongs to a\n * single elementary stream. In the lavf API this process is represented by the\n * avformat_open_input() function for opening a file, av_read_frame() for\n * reading a single packet and finally avformat_close_input(), which does the\n * cleanup.\n *\n * @section lavf_decoding_open Opening a media file\n * The minimum information required to open a file is its URL, which\n * is passed to avformat_open_input(), as in the following code:\n * @code\n * const char    *url = \"file:in.mp3\";\n * AVFormatContext *s = NULL;\n * int ret = avformat_open_input(&s, url, NULL, NULL);\n * if (ret < 0)\n *     abort();\n * @endcode\n * The above code attempts to allocate an AVFormatContext, open the\n * specified file (autodetecting the format) and read the header, exporting the\n * information stored there into s. 
Some formats do not have a header or do not\n * store enough information there, so it is recommended that you call the\n * avformat_find_stream_info() function which tries to read and decode a few\n * frames to find missing information.\n *\n * In some cases you might want to preallocate an AVFormatContext yourself with\n * avformat_alloc_context() and do some tweaking on it before passing it to\n * avformat_open_input(). One such case is when you want to use custom functions\n * for reading input data instead of lavf internal I/O layer.\n * To do that, create your own AVIOContext with avio_alloc_context(), passing\n * your reading callbacks to it. Then set the @em pb field of your\n * AVFormatContext to newly created AVIOContext.\n *\n * Since the format of the opened file is in general not known until after\n * avformat_open_input() has returned, it is not possible to set demuxer private\n * options on a preallocated context. Instead, the options should be passed to\n * avformat_open_input() wrapped in an AVDictionary:\n * @code\n * AVDictionary *options = NULL;\n * av_dict_set(&options, \"video_size\", \"640x480\", 0);\n * av_dict_set(&options, \"pixel_format\", \"rgb24\", 0);\n *\n * if (avformat_open_input(&s, url, NULL, &options) < 0)\n *     abort();\n * av_dict_free(&options);\n * @endcode\n * This code passes the private options 'video_size' and 'pixel_format' to the\n * demuxer. They would be necessary for e.g. the rawvideo demuxer, since it\n * cannot know how to interpret raw video data otherwise. If the format turns\n * out to be something different than raw video, those options will not be\n * recognized by the demuxer and therefore will not be applied. Such unrecognized\n * options are then returned in the options dictionary (recognized options are\n * consumed). 
The calling program can handle such unrecognized options as it\n * wishes, e.g.\n * @code\n * AVDictionaryEntry *e;\n * if (e = av_dict_get(options, \"\", NULL, AV_DICT_IGNORE_SUFFIX)) {\n *     fprintf(stderr, \"Option %s not recognized by the demuxer.\\n\", e->key);\n *     abort();\n * }\n * @endcode\n *\n * After you have finished reading the file, you must close it with\n * avformat_close_input(). It will free everything associated with the file.\n *\n * @section lavf_decoding_read Reading from an opened file\n * Reading data from an opened AVFormatContext is done by repeatedly calling\n * av_read_frame() on it. Each call, if successful, will return an AVPacket\n * containing encoded data for one AVStream, identified by\n * AVPacket.stream_index. This packet may be passed straight into the libavcodec\n * decoding functions avcodec_send_packet() or avcodec_decode_subtitle2() if the\n * caller wishes to decode the data.\n *\n * AVPacket.pts, AVPacket.dts and AVPacket.duration timing information will be\n * set if known. They may also be unset (i.e. AV_NOPTS_VALUE for\n * pts/dts, 0 for duration) if the stream does not provide them. The timing\n * information will be in AVStream.time_base units, i.e. it has to be\n * multiplied by the timebase to convert them to seconds.\n *\n * If AVPacket.buf is set on the returned packet, then the packet is\n * allocated dynamically and the user may keep it indefinitely.\n * Otherwise, if AVPacket.buf is NULL, the packet data is backed by a\n * static storage somewhere inside the demuxer and the packet is only valid\n * until the next av_read_frame() call or closing the file. 
If the caller\n * requires a longer lifetime, av_dup_packet() will make an av_malloc()ed copy\n * of it.\n * In both cases, the packet must be freed with av_packet_unref() when it is no\n * longer needed.\n *\n * @section lavf_decoding_seek Seeking\n * @}\n *\n * @defgroup lavf_encoding Muxing\n * @{\n * Muxers take encoded data in the form of @ref AVPacket \"AVPackets\" and write\n * it into files or other output bytestreams in the specified container format.\n *\n * The main API functions for muxing are avformat_write_header() for writing the\n * file header, av_write_frame() / av_interleaved_write_frame() for writing the\n * packets and av_write_trailer() for finalizing the file.\n *\n * At the beginning of the muxing process, the caller must first call\n * avformat_alloc_context() to create a muxing context. The caller then sets up\n * the muxer by filling the various fields in this context:\n *\n * - The @ref AVFormatContext.oformat \"oformat\" field must be set to select the\n *   muxer that will be used.\n * - Unless the format is of the AVFMT_NOFILE type, the @ref AVFormatContext.pb\n *   \"pb\" field must be set to an opened IO context, either returned from\n *   avio_open2() or a custom one.\n * - Unless the format is of the AVFMT_NOSTREAMS type, at least one stream must\n *   be created with the avformat_new_stream() function. The caller should fill\n *   the @ref AVStream.codecpar \"stream codec parameters\" information, such as the\n *   codec @ref AVCodecParameters.codec_type \"type\", @ref AVCodecParameters.codec_id\n *   \"id\" and other parameters (e.g. width / height, the pixel or sample format,\n *   etc.) as known. 
The @ref AVStream.time_base \"stream timebase\" should\n *   be set to the timebase that the caller desires to use for this stream (note\n *   that the timebase actually used by the muxer can be different, as will be\n *   described later).\n * - It is advised to manually initialize only the relevant fields in\n *   AVCodecParameters, rather than using @ref avcodec_parameters_copy() during\n *   remuxing: there is no guarantee that the codec context values remain valid\n *   for both input and output format contexts.\n * - The caller may fill in additional information, such as @ref\n *   AVFormatContext.metadata \"global\" or @ref AVStream.metadata \"per-stream\"\n *   metadata, @ref AVFormatContext.chapters \"chapters\", @ref\n *   AVFormatContext.programs \"programs\", etc. as described in the\n *   AVFormatContext documentation. Whether such information will actually be\n *   stored in the output depends on what the container format and the muxer\n *   support.\n *\n * When the muxing context is fully set up, the caller must call\n * avformat_write_header() to initialize the muxer internals and write the file\n * header. Whether anything actually is written to the IO context at this step\n * depends on the muxer, but this function must always be called. Any muxer\n * private options must be passed in the options parameter to this function.\n *\n * The data is then sent to the muxer by repeatedly calling av_write_frame() or\n * av_interleaved_write_frame() (consult those functions' documentation for\n * discussion on the difference between them; only one of them may be used with\n * a single muxing context, they should not be mixed). Do note that the timing\n * information on the packets sent to the muxer must be in the corresponding\n * AVStream's timebase. 
That timebase is set by the muxer (in the\n * avformat_write_header() step) and may be different from the timebase\n * requested by the caller.\n *\n * Once all the data has been written, the caller must call av_write_trailer()\n * to flush any buffered packets and finalize the output file, then close the IO\n * context (if any) and finally free the muxing context with\n * avformat_free_context().\n * @}\n *\n * @defgroup lavf_io I/O Read/Write\n * @{\n * @section lavf_io_dirlist Directory listing\n * The directory listing API makes it possible to list files on remote servers.\n *\n * Some of possible use cases:\n * - an \"open file\" dialog to choose files from a remote location,\n * - a recursive media finder providing a player with an ability to play all\n * files from a given directory.\n *\n * @subsection lavf_io_dirlist_open Opening a directory\n * At first, a directory needs to be opened by calling avio_open_dir()\n * supplied with a URL and, optionally, ::AVDictionary containing\n * protocol-specific parameters. The function returns zero or positive\n * integer and allocates AVIODirContext on success.\n *\n * @code\n * AVIODirContext *ctx = NULL;\n * if (avio_open_dir(&ctx, \"smb://example.com/some_dir\", NULL) < 0) {\n *     fprintf(stderr, \"Cannot open directory.\\n\");\n *     abort();\n * }\n * @endcode\n *\n * This code tries to open a sample directory using smb protocol without\n * any additional parameters.\n *\n * @subsection lavf_io_dirlist_read Reading entries\n * Each directory's entry (i.e. file, another directory, anything else\n * within ::AVIODirEntryType) is represented by AVIODirEntry.\n * Reading consecutive entries from an opened AVIODirContext is done by\n * repeatedly calling avio_read_dir() on it. Each call returns zero or\n * positive integer if successful. Reading can be stopped right after the\n * NULL entry has been read -- it means there are no entries left to be\n * read. 
The following code reads all entries from a directory associated\n * with ctx and prints their names to standard output.\n * @code\n * AVIODirEntry *entry = NULL;\n * for (;;) {\n *     if (avio_read_dir(ctx, &entry) < 0) {\n *         fprintf(stderr, \"Cannot list directory.\\n\");\n *         abort();\n *     }\n *     if (!entry)\n *         break;\n *     printf(\"%s\\n\", entry->name);\n *     avio_free_directory_entry(&entry);\n * }\n * @endcode\n * @}\n *\n * @defgroup lavf_codec Demuxers\n * @{\n * @defgroup lavf_codec_native Native Demuxers\n * @{\n * @}\n * @defgroup lavf_codec_wrappers External library wrappers\n * @{\n * @}\n * @}\n * @defgroup lavf_protos I/O Protocols\n * @{\n * @}\n * @defgroup lavf_internal Internal\n * @{\n * @}\n * @}\n */\n\n#include <time.h>\n#include <stdio.h>  /* FILE */\n#include \"libavcodec/avcodec.h\"\n#include \"libavutil/dict.h\"\n#include \"libavutil/log.h\"\n\n#include \"avio.h\"\n#include \"libavformat/version.h\"\n\nstruct AVFormatContext;\n\nstruct AVDeviceInfoList;\nstruct AVDeviceCapabilitiesQuery;\n\n/**\n * @defgroup metadata_api Public Metadata API\n * @{\n * @ingroup libavf\n * The metadata API allows libavformat to export metadata tags to a client\n * application when demuxing. Conversely it allows a client application to\n * set metadata when muxing.\n *\n * Metadata is exported or set as pairs of key/value strings in the 'metadata'\n * fields of the AVFormatContext, AVStream, AVChapter and AVProgram structs\n * using the @ref lavu_dict \"AVDictionary\" API. Like all strings in FFmpeg,\n * metadata is assumed to be UTF-8 encoded Unicode. Note that metadata\n * exported by demuxers isn't checked to be valid UTF-8 in most cases.\n *\n * Important concepts to keep in mind:\n * -  Keys are unique; there can never be 2 tags with the same key. 
This is\n *    also meant semantically, i.e., a demuxer should not knowingly produce\n *    several keys that are literally different but semantically identical.\n *    E.g., key=Author5, key=Author6. In this example, all authors must be\n *    placed in the same tag.\n * -  Metadata is flat, not hierarchical; there are no subtags. If you\n *    want to store, e.g., the email address of the child of producer Alice\n *    and actor Bob, that could have key=alice_and_bobs_childs_email_address.\n * -  Several modifiers can be applied to the tag name. This is done by\n *    appending a dash character ('-') and the modifier name in the order\n *    they appear in the list below -- e.g. foo-eng-sort, not foo-sort-eng.\n *    -  language -- a tag whose value is localized for a particular language\n *       is appended with the ISO 639-2/B 3-letter language code.\n *       For example: Author-ger=Michael, Author-eng=Mike\n *       The original/default language is in the unqualified \"Author\" tag.\n *       A demuxer should set a default if it sets any translated tag.\n *    -  sorting  -- a modified version of a tag that should be used for\n *       sorting will have '-sort' appended. E.g. artist=\"The Beatles\",\n *       artist-sort=\"Beatles, The\".\n * - Some protocols and demuxers support metadata updates. After a successful\n *   call to av_read_packet(), AVFormatContext.event_flags or AVStream.event_flags\n *   will be updated to indicate if metadata changed. 
In order to detect metadata\n *   changes on a stream, you need to loop through all streams in the AVFormatContext\n *   and check their individual event_flags.\n *\n * -  Demuxers attempt to export metadata in a generic format, however tags\n *    with no generic equivalents are left as they are stored in the container.\n *    A list of generic tag names follows:\n *\n @verbatim\n album        -- name of the set this work belongs to\n album_artist -- main creator of the set/album, if different from artist.\n                 e.g. \"Various Artists\" for compilation albums.\n artist       -- main creator of the work\n comment      -- any additional description of the file.\n composer     -- who composed the work, if different from artist.\n copyright    -- name of copyright holder.\n creation_time-- date when the file was created, preferably in ISO 8601.\n date         -- date when the work was created, preferably in ISO 8601.\n disc         -- number of a subset, e.g. disc in a multi-disc collection.\n encoder      -- name/settings of the software/hardware that produced the file.\n encoded_by   -- person/group who created the file.\n filename     -- original name of the file.\n genre        -- <self-evident>.\n language     -- main language in which the work is performed, preferably\n                 in ISO 639-2 format. 
Multiple languages can be specified by\n                 separating them with commas.\n performer    -- artist who performed the work, if different from artist.\n                 E.g. for \"Also sprach Zarathustra\", artist would be \"Richard\n                 Strauss\" and performer \"London Philharmonic Orchestra\".\n publisher    -- name of the label/publisher.\n service_name     -- name of the service in broadcasting (channel name).\n service_provider -- name of the service provider in broadcasting.\n title        -- name of the work.\n track        -- number of this work in the set, can be in form current/total.\n variant_bitrate -- the total bitrate of the bitrate variant that the current stream is part of\n @endverbatim\n *\n * Look in the examples section for an application example of how to use the Metadata API.\n *\n * @}\n */\n\n/* packet functions */\n\n\n/**\n * Allocate and read the payload of a packet and initialize its\n * fields with default values.\n *\n * @param s    associated IO context\n * @param pkt packet\n * @param size desired payload size\n * @return >0 (read size) if OK, AVERROR_xxx otherwise\n */\nint av_get_packet(AVIOContext *s, AVPacket *pkt, int size);\n\n\n/**\n * Read data and append it to the current content of the AVPacket.\n * If pkt->size is 0 this is identical to av_get_packet.\n * Note that this uses av_grow_packet and thus involves a realloc\n * which is inefficient. 
Thus this function should only be used\n * when there is no reasonable way to know (an upper bound of)\n * the final size.\n *\n * @param s    associated IO context\n * @param pkt packet\n * @param size amount of data to read\n * @return >0 (read size) if OK, AVERROR_xxx otherwise, previous data\n *         will not be lost even if an error occurs.\n */\nint av_append_packet(AVIOContext *s, AVPacket *pkt, int size);\n\n/*************************************************/\n/* input/output formats */\n\nstruct AVCodecTag;\n\n/**\n * This structure contains the data a format has to probe a file.\n */\ntypedef struct AVProbeData {\n    const char *filename;\n    unsigned char *buf; /**< Buffer must have AVPROBE_PADDING_SIZE of extra allocated bytes filled with zero. */\n    int buf_size;       /**< Size of buf except extra allocated bytes */\n    const char *mime_type; /**< mime_type, when known. */\n} AVProbeData;\n\n#define AVPROBE_SCORE_RETRY (AVPROBE_SCORE_MAX/4)\n#define AVPROBE_SCORE_STREAM_RETRY (AVPROBE_SCORE_MAX/4-1)\n\n#define AVPROBE_SCORE_EXTENSION  50 ///< score for file extension\n#define AVPROBE_SCORE_MIME       75 ///< score for file mime type\n#define AVPROBE_SCORE_MAX       100 ///< maximum score\n\n#define AVPROBE_PADDING_SIZE 32             ///< extra allocated bytes at the end of the probe buffer\n\n/// Demuxer will use avio_open, no opened file should be provided by the caller.\n#define AVFMT_NOFILE        0x0001\n#define AVFMT_NEEDNUMBER    0x0002 /**< Needs '%d' in filename. */\n#define AVFMT_SHOW_IDS      0x0008 /**< Show format stream IDs numbers. */\n#define AVFMT_GLOBALHEADER  0x0040 /**< Format wants global header. */\n#define AVFMT_NOTIMESTAMPS  0x0080 /**< Format does not need / have any timestamps. */\n#define AVFMT_GENERIC_INDEX 0x0100 /**< Use generic index building code. */\n#define AVFMT_TS_DISCONT    0x0200 /**< Format allows timestamp discontinuities. 
Note, muxers always require valid (monotone) timestamps */\n#define AVFMT_VARIABLE_FPS  0x0400 /**< Format allows variable fps. */\n#define AVFMT_NODIMENSIONS  0x0800 /**< Format does not need width/height */\n#define AVFMT_NOSTREAMS     0x1000 /**< Format does not require any streams */\n#define AVFMT_NOBINSEARCH   0x2000 /**< Format does not allow to fall back on binary search via read_timestamp */\n#define AVFMT_NOGENSEARCH   0x4000 /**< Format does not allow to fall back on generic search */\n#define AVFMT_NO_BYTE_SEEK  0x8000 /**< Format does not allow seeking by bytes */\n#define AVFMT_ALLOW_FLUSH  0x10000 /**< Format allows flushing. If not set, the muxer will not receive a NULL packet in the write_packet function. */\n#define AVFMT_TS_NONSTRICT 0x20000 /**< Format does not require strictly\n                                        increasing timestamps, but they must\n                                        still be monotonic */\n#define AVFMT_TS_NEGATIVE  0x40000 /**< Format allows muxing negative\n                                        timestamps. If not set the timestamp\n                                        will be shifted in av_write_frame and\n                                        av_interleaved_write_frame so they\n                                        start from 0.\n                                        The user or muxer can override this through\n                                        AVFormatContext.avoid_negative_ts\n                                        */\n\n#define AVFMT_SEEK_TO_PTS   0x4000000 /**< Seeking is based on PTS */\n\n/**\n * @addtogroup lavf_encoding\n * @{\n */\ntypedef struct AVOutputFormat {\n    const char *name;\n    /**\n     * Descriptive name for the format, meant to be more human-readable\n     * than name. 
You should use the NULL_IF_CONFIG_SMALL() macro\n     * to define it.\n     */\n    const char *long_name;\n    const char *mime_type;\n    const char *extensions; /**< comma-separated filename extensions */\n    /* output support */\n    enum AVCodecID audio_codec;    /**< default audio codec */\n    enum AVCodecID video_codec;    /**< default video codec */\n    enum AVCodecID subtitle_codec; /**< default subtitle codec */\n    /**\n     * can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER,\n     * AVFMT_GLOBALHEADER, AVFMT_NOTIMESTAMPS, AVFMT_VARIABLE_FPS,\n     * AVFMT_NODIMENSIONS, AVFMT_NOSTREAMS, AVFMT_ALLOW_FLUSH,\n     * AVFMT_TS_NONSTRICT, AVFMT_TS_NEGATIVE\n     */\n    int flags;\n\n    /**\n     * List of supported codec_id-codec_tag pairs, ordered by \"better\n     * choice first\". The arrays are all terminated by AV_CODEC_ID_NONE.\n     */\n    const struct AVCodecTag * const *codec_tag;\n\n\n    const AVClass *priv_class; ///< AVClass for the private context\n\n    /*****************************************************************\n     * No fields below this line are part of the public API. They\n     * may not be used outside of libavformat and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n    struct AVOutputFormat *next;\n    /**\n     * size of private data so that it can be allocated in the wrapper\n     */\n    int priv_data_size;\n\n    int (*write_header)(struct AVFormatContext *);\n    /**\n     * Write a packet. 
If AVFMT_ALLOW_FLUSH is set in flags,\n     * pkt can be NULL in order to flush data buffered in the muxer.\n     * When flushing, return 0 if there still is more data to flush,\n     * or 1 if everything was flushed and there is no more buffered\n     * data.\n     */\n    int (*write_packet)(struct AVFormatContext *, AVPacket *pkt);\n    int (*write_trailer)(struct AVFormatContext *);\n    /**\n     * Currently only used to set pixel format if not YUV420P.\n     */\n    int (*interleave_packet)(struct AVFormatContext *, AVPacket *out,\n                             AVPacket *in, int flush);\n    /**\n     * Test if the given codec can be stored in this container.\n     *\n     * @return 1 if the codec is supported, 0 if it is not.\n     *         A negative number if unknown.\n     *         MKTAG('A', 'P', 'I', 'C') if the codec is only supported as AV_DISPOSITION_ATTACHED_PIC\n     */\n    int (*query_codec)(enum AVCodecID id, int std_compliance);\n\n    void (*get_output_timestamp)(struct AVFormatContext *s, int stream,\n                                 int64_t *dts, int64_t *wall);\n    /**\n     * Allows sending messages from application to device.\n     */\n    int (*control_message)(struct AVFormatContext *s, int type,\n                           void *data, size_t data_size);\n\n    /**\n     * Write an uncoded AVFrame.\n     *\n     * See av_write_uncoded_frame() for details.\n     *\n     * The library will free *frame afterwards, but the muxer can prevent it\n     * by setting the pointer to NULL.\n     */\n    int (*write_uncoded_frame)(struct AVFormatContext *, int stream_index,\n                               AVFrame **frame, unsigned flags);\n    /**\n     * Returns device list with it properties.\n     * @see avdevice_list_devices() for more details.\n     */\n    int (*get_device_list)(struct AVFormatContext *s, struct AVDeviceInfoList *device_list);\n    /**\n     * Initialize device capabilities submodule.\n     * @see 
avdevice_capabilities_create() for more details.\n     */\n    int (*create_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);\n    /**\n     * Free device capabilities submodule.\n     * @see avdevice_capabilities_free() for more details.\n     */\n    int (*free_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);\n    enum AVCodecID data_codec; /**< default data codec */\n    /**\n     * Initialize format. May allocate data here, and set any AVFormatContext or\n     * AVStream parameters that need to be set before packets are sent.\n     * This method must not write output.\n     *\n     * Return 0 if streams were fully configured, 1 if not, negative AVERROR on failure\n     *\n     * Any allocations made here must be freed in deinit().\n     */\n    int (*init)(struct AVFormatContext *);\n    /**\n     * Deinitialize format. If present, this is called whenever the muxer is being\n     * destroyed, regardless of whether or not the header has been written.\n     *\n     * If a trailer is being written, this is called after write_trailer().\n     *\n     * This is called if init() fails as well.\n     */\n    void (*deinit)(struct AVFormatContext *);\n    /**\n     * Set up any necessary bitstream filtering and extract any extra data needed\n     * for the global header.\n     * Return 0 if more packets from this stream must be checked; 1 if not.\n     */\n    int (*check_bitstream)(struct AVFormatContext *, const AVPacket *pkt);\n} AVOutputFormat;\n/**\n * @}\n */\n\n/**\n * @addtogroup lavf_decoding\n * @{\n */\ntypedef struct AVInputFormat {\n    /**\n     * A comma separated list of short names for the format. New names\n     * may be appended with a minor bump.\n     */\n    const char *name;\n\n    /**\n     * Descriptive name for the format, meant to be more human-readable\n     * than name. 
You should use the NULL_IF_CONFIG_SMALL() macro\n     * to define it.\n     */\n    const char *long_name;\n\n    /**\n     * Can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_SHOW_IDS,\n     * AVFMT_GENERIC_INDEX, AVFMT_TS_DISCONT, AVFMT_NOBINSEARCH,\n     * AVFMT_NOGENSEARCH, AVFMT_NO_BYTE_SEEK, AVFMT_SEEK_TO_PTS.\n     */\n    int flags;\n\n    /**\n     * If extensions are defined, then no probe is done. You should\n     * usually not use extension format guessing because it is not\n     * reliable enough.\n     */\n    const char *extensions;\n\n    const struct AVCodecTag * const *codec_tag;\n\n    const AVClass *priv_class; ///< AVClass for the private context\n\n    /**\n     * Comma-separated list of mime types.\n     * It is used to check for matching mime types while probing.\n     * @see av_probe_input_format2\n     */\n    const char *mime_type;\n\n    /*****************************************************************\n     * No fields below this line are part of the public API. They\n     * may not be used outside of libavformat and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n    struct AVInputFormat *next;\n\n    /**\n     * Raw demuxers store their codec ID here.\n     */\n    int raw_codec_id;\n\n    /**\n     * Size of private data so that it can be allocated in the wrapper.\n     */\n    int priv_data_size;\n\n    /**\n     * Tell if a given file has a chance of being parsed as this format.\n     * The buffer provided is guaranteed to be AVPROBE_PADDING_SIZE bytes\n     * big so you do not have to check for that unless you need more.\n     */\n    int (*read_probe)(AVProbeData *);\n\n    /**\n     * Read the format header and initialize the AVFormatContext\n     * structure. Return 0 if OK.
'avformat_new_stream' should be\n     * called to create new streams.\n     */\n    int (*read_header)(struct AVFormatContext *);\n\n    /**\n     * Read one packet and put it in 'pkt'. pts and flags are also\n     * set. 'avformat_new_stream' can be called only if the flag\n     * AVFMTCTX_NOHEADER is used and only in the calling thread (not in a\n     * background thread).\n     * @return 0 on success, < 0 on error.\n     *         When returning an error, pkt must not have been allocated\n     *         or must be freed before returning\n     */\n    int (*read_packet)(struct AVFormatContext *, AVPacket *pkt);\n\n    /**\n     * Close the stream. The AVFormatContext and AVStreams are not\n     * freed by this function\n     */\n    int (*read_close)(struct AVFormatContext *);\n\n    /**\n     * Seek to a given timestamp relative to the frames in\n     * stream component stream_index.\n     * @param stream_index Must not be -1.\n     * @param flags Selects which direction should be preferred if no exact\n     *              match is available.\n     * @return >= 0 on success (but not necessarily the new offset)\n     */\n    int (*read_seek)(struct AVFormatContext *,\n                     int stream_index, int64_t timestamp, int flags);\n\n    /**\n     * Get the next timestamp in stream[stream_index].time_base units.\n     * @return the timestamp or AV_NOPTS_VALUE if an error occurred\n     */\n    int64_t (*read_timestamp)(struct AVFormatContext *s, int stream_index,\n                              int64_t *pos, int64_t pos_limit);\n\n    /**\n     * Start/resume playing - only meaningful if using a network-based format\n     * (RTSP).\n     */\n    int (*read_play)(struct AVFormatContext *);\n\n    /**\n     * Pause playing - only meaningful if using a network-based format\n     * (RTSP).\n     */\n    int (*read_pause)(struct AVFormatContext *);\n\n    /**\n     * Seek to timestamp ts.\n     * Seeking will be done so that the point from which all active 
streams\n     * can be presented successfully will be closest to ts and within min/max_ts.\n     * Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.\n     */\n    int (*read_seek2)(struct AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags);\n\n    /**\n     * Returns device list with its properties.\n     * @see avdevice_list_devices() for more details.\n     */\n    int (*get_device_list)(struct AVFormatContext *s, struct AVDeviceInfoList *device_list);\n\n    /**\n     * Initialize device capabilities submodule.\n     * @see avdevice_capabilities_create() for more details.\n     */\n    int (*create_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);\n\n    /**\n     * Free device capabilities submodule.\n     * @see avdevice_capabilities_free() for more details.\n     */\n    int (*free_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);\n} AVInputFormat;\n/**\n * @}\n */\n\nenum AVStreamParseType {\n    AVSTREAM_PARSE_NONE,\n    AVSTREAM_PARSE_FULL,       /**< full parsing and repack */\n    AVSTREAM_PARSE_HEADERS,    /**< Only parse headers, do not repack.
*/\n    AVSTREAM_PARSE_TIMESTAMPS, /**< full parsing and interpolation of timestamps for frames not starting on a packet boundary */\n    AVSTREAM_PARSE_FULL_ONCE,  /**< full parsing and repack of the first frame only, only implemented for H.264 currently */\n    AVSTREAM_PARSE_FULL_RAW,   /**< full parsing and repack with timestamp and position generation by parser for raw formats;\n                                    this assumes that each packet in the file contains no demuxer level headers and\n                                    just codec level data, otherwise position generation would fail */\n};\n\ntypedef struct AVIndexEntry {\n    int64_t pos;\n    int64_t timestamp;        /**<\n                               * Timestamp in AVStream.time_base units, preferably the time from which on correctly decoded frames are available\n                               * when seeking to this entry. That means preferably the PTS on keyframe-based formats.\n                               * But demuxers can choose to store a different timestamp, if it is more convenient for the implementation or nothing better\n                               * is known.\n                               */\n#define AVINDEX_KEYFRAME 0x0001\n#define AVINDEX_DISCARD_FRAME  0x0002    /**\n                                          * Flag is used to indicate which frame should be discarded after decoding.\n                                          */\n    int flags:2;\n    int size:30; //Yeah, trying to keep the size of this small to reduce memory requirements (it is 24 vs. 32 bytes due to possible 8-byte alignment).\n    int min_distance;         /**< Minimum distance between this and the previous keyframe, used to avoid unneeded searching.
*/\n} AVIndexEntry;\n\n#define AV_DISPOSITION_DEFAULT   0x0001\n#define AV_DISPOSITION_DUB       0x0002\n#define AV_DISPOSITION_ORIGINAL  0x0004\n#define AV_DISPOSITION_COMMENT   0x0008\n#define AV_DISPOSITION_LYRICS    0x0010\n#define AV_DISPOSITION_KARAOKE   0x0020\n\n/**\n * Track should be used during playback by default.\n * Useful for subtitle track that should be displayed\n * even when user did not explicitly ask for subtitles.\n */\n#define AV_DISPOSITION_FORCED    0x0040\n#define AV_DISPOSITION_HEARING_IMPAIRED  0x0080  /**< stream for hearing impaired audiences */\n#define AV_DISPOSITION_VISUAL_IMPAIRED   0x0100  /**< stream for visual impaired audiences */\n#define AV_DISPOSITION_CLEAN_EFFECTS     0x0200  /**< stream without voice */\n/**\n * The stream is stored in the file as an attached picture/\"cover art\" (e.g.\n * APIC frame in ID3v2). The first (usually only) packet associated with it\n * will be returned among the first few packets read from the file unless\n * seeking takes place. It can also be accessed at any time in\n * AVStream.attached_pic.\n */\n#define AV_DISPOSITION_ATTACHED_PIC      0x0400\n/**\n * The stream is sparse, and contains thumbnail images, often corresponding\n * to chapter markers. 
Only ever used with AV_DISPOSITION_ATTACHED_PIC.\n */\n#define AV_DISPOSITION_TIMED_THUMBNAILS  0x0800\n\ntypedef struct AVStreamInternal AVStreamInternal;\n\n/**\n * To specify text track kind (different from subtitles default).\n */\n#define AV_DISPOSITION_CAPTIONS     0x10000\n#define AV_DISPOSITION_DESCRIPTIONS 0x20000\n#define AV_DISPOSITION_METADATA     0x40000\n#define AV_DISPOSITION_DEPENDENT    0x80000 ///< dependent audio stream (mix_type=0 in mpegts)\n\n/**\n * Options for behavior on timestamp wrap detection.\n */\n#define AV_PTS_WRAP_IGNORE      0   ///< ignore the wrap\n#define AV_PTS_WRAP_ADD_OFFSET  1   ///< add the format specific offset on wrap detection\n#define AV_PTS_WRAP_SUB_OFFSET  -1  ///< subtract the format specific offset on wrap detection\n\n/**\n * Stream structure.\n * New fields can be added to the end with minor version bumps.\n * Removal, reordering and changes to existing fields require a major\n * version bump.\n * sizeof(AVStream) must not be used outside libav*.\n */\ntypedef struct AVStream {\n    int index;    /**< stream index in AVFormatContext */\n    /**\n     * Format-specific stream ID.\n     * decoding: set by libavformat\n     * encoding: set by the user, replaced by libavformat if left unset\n     */\n    int id;\n#if FF_API_LAVF_AVCTX\n    /**\n     * @deprecated use the codecpar struct instead\n     */\n    attribute_deprecated\n    AVCodecContext *codec;\n#endif\n    void *priv_data;\n\n    /**\n     * This is the fundamental unit of time (in seconds) in terms\n     * of which frame timestamps are represented.\n     *\n     * decoding: set by libavformat\n     * encoding: May be set by the caller before avformat_write_header() to\n     *           provide a hint to the muxer about the desired timebase. 
In\n     *           avformat_write_header(), the muxer will overwrite this field\n     *           with the timebase that will actually be used for the timestamps\n     *           written into the file (which may or may not be related to the\n     *           user-provided one, depending on the format).\n     */\n    AVRational time_base;\n\n    /**\n     * Decoding: pts of the first frame of the stream in presentation order, in stream time base.\n     * Only set this if you are absolutely 100% sure that the value you set\n     * it to really is the pts of the first frame.\n     * This may be undefined (AV_NOPTS_VALUE).\n     * @note The ASF header does NOT contain a correct start_time; the ASF\n     * demuxer must NOT set this.\n     */\n    int64_t start_time;\n\n    /**\n     * Decoding: duration of the stream, in stream time base.\n     * If a source file does not specify a duration, but does specify\n     * a bitrate, this value will be estimated from bitrate and file size.\n     *\n     * Encoding: May be set by the caller before avformat_write_header() to\n     * provide a hint to the muxer about the estimated duration.\n     */\n    int64_t duration;\n\n    int64_t nb_frames;                 ///< number of frames in this stream if known or 0\n\n    int disposition; /**< AV_DISPOSITION_* bit field */\n\n    enum AVDiscard discard; ///< Selects which packets can be discarded at will and do not need to be demuxed.\n\n    /**\n     * sample aspect ratio (0 if unknown)\n     * - encoding: Set by user.\n     * - decoding: Set by libavformat.\n     */\n    AVRational sample_aspect_ratio;\n\n    AVDictionary *metadata;\n\n    /**\n     * Average framerate\n     *\n     * - demuxing: May be set by libavformat when creating the stream or in\n     *             avformat_find_stream_info().\n     * - muxing: May be set by the caller before avformat_write_header().\n     */\n    AVRational avg_frame_rate;\n\n    /**\n     * For streams with AV_DISPOSITION_ATTACHED_PIC
disposition, this packet\n     * will contain the attached picture.\n     *\n     * decoding: set by libavformat, must not be modified by the caller.\n     * encoding: unused\n     */\n    AVPacket attached_pic;\n\n    /**\n     * An array of side data that applies to the whole stream (i.e. the\n     * container does not allow it to change between packets).\n     *\n     * There may be no overlap between the side data in this array and side data\n     * in the packets. I.e. a given side data is either exported by the muxer\n     * (demuxing) / set by the caller (muxing) in this array, then it never\n     * appears in the packets, or the side data is exported / sent through\n     * the packets (always in the first packet where the value becomes known or\n     * changes), then it does not appear in this array.\n     *\n     * - demuxing: Set by libavformat when the stream is created.\n     * - muxing: May be set by the caller before avformat_write_header().\n     *\n     * Freed by libavformat in avformat_free_context().\n     *\n     * @see av_format_inject_global_side_data()\n     */\n    AVPacketSideData *side_data;\n    /**\n     * The number of elements in the AVStream.side_data array.\n     */\n    int            nb_side_data;\n\n    /**\n     * Flags for the user to detect events happening on the stream. Flags must\n     * be cleared by the user once the event has been handled.\n     * A combination of AVSTREAM_EVENT_FLAG_*.\n     */\n    int event_flags;\n#define AVSTREAM_EVENT_FLAG_METADATA_UPDATED 0x0001 ///< The call resulted in updated metadata.\n\n    /**\n     * Real base framerate of the stream.\n     * This is the lowest framerate with which all timestamps can be\n     * represented accurately (it is the least common multiple of all\n     * framerates in the stream). 
Note, this value is just a guess!\n     * For example, if the time base is 1/90000 and all frames have either\n     * approximately 3600 or 1800 timer ticks, then r_frame_rate will be 50/1.\n     */\n    AVRational r_frame_rate;\n\n#if FF_API_LAVF_FFSERVER\n    /**\n     * String containing pairs of key and values describing recommended encoder configuration.\n     * Pairs are separated by ','.\n     * Keys are separated from values by '='.\n     *\n     * @deprecated unused\n     */\n    attribute_deprecated\n    char *recommended_encoder_configuration;\n#endif\n\n    /**\n     * Codec parameters associated with this stream. Allocated and freed by\n     * libavformat in avformat_new_stream() and avformat_free_context()\n     * respectively.\n     *\n     * - demuxing: filled by libavformat on stream creation or in\n     *             avformat_find_stream_info()\n     * - muxing: filled by the caller before avformat_write_header()\n     */\n    AVCodecParameters *codecpar;\n\n    /*****************************************************************\n     * All fields below this line are not part of the public API. They\n     * may not be used outside of libavformat and can be changed and\n     * removed at will.\n     * Internal note: be aware that physically removing these fields\n     * will break ABI. 
Replace removed fields with dummy fields, and\n     * add new fields to AVStreamInternal.\n     *****************************************************************\n     */\n\n#define MAX_STD_TIMEBASES (30*12+30+3+6)\n    /**\n     * Stream information used internally by avformat_find_stream_info()\n     */\n    struct {\n        int64_t last_dts;\n        int64_t duration_gcd;\n        int duration_count;\n        int64_t rfps_duration_sum;\n        double (*duration_error)[2][MAX_STD_TIMEBASES];\n        int64_t codec_info_duration;\n        int64_t codec_info_duration_fields;\n        int frame_delay_evidence;\n\n        /**\n         * 0  -> decoder has not been searched for yet.\n         * >0 -> decoder found\n         * <0 -> decoder with codec_id == -found_decoder has not been found\n         */\n        int found_decoder;\n\n        int64_t last_duration;\n\n        /**\n         * Those are used for average framerate estimation.\n         */\n        int64_t fps_first_dts;\n        int     fps_first_dts_idx;\n        int64_t fps_last_dts;\n        int     fps_last_dts_idx;\n\n    } *info;\n\n    int pts_wrap_bits; /**< number of bits in pts (used for wrapping control) */\n\n    // Timestamp generation support:\n    /**\n     * Timestamp corresponding to the last dts sync point.\n     *\n     * Initialized when AVCodecParserContext.dts_sync_point >= 0 and\n     * a DTS is received from the underlying container. 
Otherwise set to\n     * AV_NOPTS_VALUE by default.\n     */\n    int64_t first_dts;\n    int64_t cur_dts;\n    int64_t last_IP_pts;\n    int last_IP_duration;\n\n    /**\n     * Number of packets to buffer for codec probing\n     */\n    int probe_packets;\n\n    /**\n     * Number of frames that have been demuxed during avformat_find_stream_info()\n     */\n    int codec_info_nb_frames;\n\n    /* av_read_frame() support */\n    enum AVStreamParseType need_parsing;\n    struct AVCodecParserContext *parser;\n\n    /**\n     * last packet in packet_buffer for this stream when muxing.\n     */\n    struct AVPacketList *last_in_packet_buffer;\n    AVProbeData probe_data;\n#define MAX_REORDER_DELAY 16\n    int64_t pts_buffer[MAX_REORDER_DELAY+1];\n\n    AVIndexEntry *index_entries; /**< Only used if the format does not\n                                    support seeking natively. */\n    int nb_index_entries;\n    unsigned int index_entries_allocated_size;\n\n    /**\n     * Stream Identifier\n     * This is the MPEG-TS stream identifier +1\n     * 0 means unknown\n     */\n    int stream_identifier;\n\n    int64_t interleaver_chunk_size;\n    int64_t interleaver_chunk_duration;\n\n    /**\n     * stream probing state\n     * -1   -> probing finished\n     *  0   -> no probing requested\n     * rest -> perform probing with request_probe being the minimum score to accept.\n     * NOT PART OF PUBLIC API\n     */\n    int request_probe;\n    /**\n     * Indicates that everything up to the next keyframe\n     * should be discarded.\n     */\n    int skip_to_keyframe;\n\n    /**\n     * Number of samples to skip at the start of the frame decoded from the next packet.\n     */\n    int skip_samples;\n\n    /**\n     * If not 0, the number of samples that should be skipped from the start of\n     * the stream (the samples are removed from packets with pts==0, which also\n     * assumes negative timestamps do not happen).\n     * Intended for use with formats such as mp3 with 
ad-hoc gapless audio\n     * support.\n     */\n    int64_t start_skip_samples;\n\n    /**\n     * If not 0, the first audio sample that should be discarded from the stream.\n     * This is broken by design (needs global sample count), but can't be\n     * avoided for broken by design formats such as mp3 with ad-hoc gapless\n     * audio support.\n     */\n    int64_t first_discard_sample;\n\n    /**\n     * The sample after the last sample that is intended to be discarded after\n     * first_discard_sample. Works on frame boundaries only. Used to prevent\n     * early EOF if the gapless info is broken (e.g. concatenated mp3s).\n     */\n    int64_t last_discard_sample;\n\n    /**\n     * Number of internally decoded frames, used internally in libavformat; do not access.\n     * Its lifetime differs from that of info, which is why it is not in that structure.\n     */\n    int nb_decoded_frames;\n\n    /**\n     * Timestamp offset added to timestamps before muxing\n     * NOT PART OF PUBLIC API\n     */\n    int64_t mux_ts_offset;\n\n    /**\n     * Internal data to check for wrapping of the time stamp\n     */\n    int64_t pts_wrap_reference;\n\n    /**\n     * Options for behavior when a wrap is detected.\n     *\n     * Defined by AV_PTS_WRAP_ values.\n     *\n     * If correction is enabled, there are two possibilities:\n     * If the first time stamp is near the wrap point, the wrap offset\n     * will be subtracted, which will create negative time stamps.\n     * Otherwise the offset will be added.\n     */\n    int pts_wrap_behavior;\n\n    /**\n     * Internal data to prevent doing update_initial_durations() twice\n     */\n    int update_initial_durations_done;\n\n    /**\n     * Internal data to generate dts from pts\n     */\n    int64_t pts_reorder_error[MAX_REORDER_DELAY+1];\n    uint8_t pts_reorder_error_count[MAX_REORDER_DELAY+1];\n\n    /**\n     * Internal data to analyze DTS and detect faulty mpeg streams\n     */\n    int64_t last_dts_for_order_check;\n
    uint8_t dts_ordered;\n    uint8_t dts_misordered;\n\n    /**\n     * Internal data to inject global side data\n     */\n    int inject_global_side_data;\n\n    /**\n     * display aspect ratio (0 if unknown)\n     * - encoding: unused\n     * - decoding: Set by libavformat to calculate sample_aspect_ratio internally\n     */\n    AVRational display_aspect_ratio;\n\n    /**\n     * An opaque field for libavformat internal usage.\n     * Must not be accessed in any way by callers.\n     */\n    AVStreamInternal *internal;\n} AVStream;\n\n#if FF_API_FORMAT_GET_SET\n/**\n * Accessors for some AVStream fields. These used to be provided for ABI\n * compatibility, and do not need to be used anymore.\n */\nattribute_deprecated\nAVRational av_stream_get_r_frame_rate(const AVStream *s);\nattribute_deprecated\nvoid       av_stream_set_r_frame_rate(AVStream *s, AVRational r);\n#if FF_API_LAVF_FFSERVER\nattribute_deprecated\nchar* av_stream_get_recommended_encoder_configuration(const AVStream *s);\nattribute_deprecated\nvoid  av_stream_set_recommended_encoder_configuration(AVStream *s, char *configuration);\n#endif\n#endif\n\nstruct AVCodecParserContext *av_stream_get_parser(const AVStream *s);\n\n/**\n * Returns the pts of the last muxed packet + its duration.\n *\n * The returned value is undefined when used with a demuxer.\n */\nint64_t    av_stream_get_end_pts(const AVStream *st);\n\n#define AV_PROGRAM_RUNNING 1\n\n/**\n * New fields can be added to the end with minor version bumps.\n * Removal, reordering and changes to existing fields require a major\n * version bump.\n * sizeof(AVProgram) must not be used outside libav*.\n */\ntypedef struct AVProgram {\n    int            id;\n    int            flags;\n    enum AVDiscard discard;        ///< selects which program to discard and which to feed to the caller\n    unsigned int   *stream_index;\n    unsigned int   nb_stream_indexes;\n    AVDictionary *metadata;\n\n    int program_num;\n    int pmt_pid;\n    int            pcr_pid;\n\n    /*****************************************************************\n     * All fields below this line are not part of the public API. They\n     * may not be used outside of libavformat and can be changed and\n     * removed at will.\n     * New public fields should be added right above.\n     *****************************************************************\n     */\n    int64_t start_time;\n    int64_t end_time;\n\n    int64_t pts_wrap_reference;    ///< reference dts for wrap detection\n    int pts_wrap_behavior;         ///< behavior on wrap detection\n} AVProgram;\n\n#define AVFMTCTX_NOHEADER      0x0001 /**< signal that no header is present\n                                         (streams are added dynamically) */\n#define AVFMTCTX_UNSEEKABLE    0x0002 /**< signal that the stream is definitely\n                                         not seekable, and attempts to call the\n                                         seek function will fail. For some\n                                         network protocols (e.g. HLS), this can\n                                         change dynamically at runtime.
*/\n\ntypedef struct AVChapter {\n    int id;                 ///< unique ID to identify the chapter\n    AVRational time_base;   ///< time base in which the start/end timestamps are specified\n    int64_t start, end;     ///< chapter start/end time in time_base units\n    AVDictionary *metadata;\n} AVChapter;\n\n\n/**\n * Callback used by devices to communicate with application.\n */\ntypedef int (*av_format_control_message)(struct AVFormatContext *s, int type,\n                                         void *data, size_t data_size);\n\ntypedef int (*AVOpenCallback)(struct AVFormatContext *s, AVIOContext **pb, const char *url, int flags,\n                              const AVIOInterruptCB *int_cb, AVDictionary **options);\n\n/**\n * The duration of a video can be estimated through various ways, and this enum can be used\n * to know how the duration was estimated.\n */\nenum AVDurationEstimationMethod {\n    AVFMT_DURATION_FROM_PTS,    ///< Duration accurately estimated from PTSes\n    AVFMT_DURATION_FROM_STREAM, ///< Duration estimated from a stream with a known duration\n    AVFMT_DURATION_FROM_BITRATE ///< Duration estimated from bitrate (less accurate)\n};\n\ntypedef struct AVFormatInternal AVFormatInternal;\n\n/**\n * Format I/O context.\n * New fields can be added to the end with minor version bumps.\n * Removal, reordering and changes to existing fields require a major\n * version bump.\n * sizeof(AVFormatContext) must not be used outside libav*, use\n * avformat_alloc_context() to create an AVFormatContext.\n *\n * Fields can be accessed through AVOptions (av_opt*),\n * the name string used matches the associated command line parameter name and\n * can be found in libavformat/options_table.h.\n * The AVOption/command line parameter names differ in some cases from the C\n * structure field names for historic reasons or brevity.\n */\ntypedef struct AVFormatContext {\n    /**\n     * A class for logging and @ref avoptions. 
Set by avformat_alloc_context().\n     * Exports (de)muxer private options if they exist.\n     */\n    const AVClass *av_class;\n\n    /**\n     * The input container format.\n     *\n     * Demuxing only, set by avformat_open_input().\n     */\n    struct AVInputFormat *iformat;\n\n    /**\n     * The output container format.\n     *\n     * Muxing only, must be set by the caller before avformat_write_header().\n     */\n    struct AVOutputFormat *oformat;\n\n    /**\n     * Format private data. This is an AVOptions-enabled struct\n     * if and only if iformat/oformat.priv_class is not NULL.\n     *\n     * - muxing: set by avformat_write_header()\n     * - demuxing: set by avformat_open_input()\n     */\n    void *priv_data;\n\n    /**\n     * I/O context.\n     *\n     * - demuxing: either set by the user before avformat_open_input() (then\n     *             the user must close it manually) or set by avformat_open_input().\n     * - muxing: set by the user before avformat_write_header(). The caller must\n     *           take care of closing / freeing the IO context.\n     *\n     * Do NOT set this field if AVFMT_NOFILE flag is set in\n     * iformat/oformat.flags. In such a case, the (de)muxer will handle\n     * I/O in some other way and this field will be NULL.\n     */\n    AVIOContext *pb;\n\n    /* stream info */\n    /**\n     * Flags signalling stream properties. A combination of AVFMTCTX_*.\n     * Set by libavformat.\n     */\n    int ctx_flags;\n\n    /**\n     * Number of elements in AVFormatContext.streams.\n     *\n     * Set by avformat_new_stream(), must not be modified by any other code.\n     */\n    unsigned int nb_streams;\n    /**\n     * A list of all streams in the file. 
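     *
     * As an illustration (not part of the original documentation), after
     * avformat_find_stream_info() a caller often scans this array; a sketch,
     * where "ic" stands for the AVFormatContext and error handling is omitted:
     * @code
     * for (unsigned i = 0; i < ic->nb_streams; i++) {
     *     AVStream *st = ic->streams[i];
     *     if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
     *         av_log(NULL, AV_LOG_INFO, "video stream at index %u\n", i);
     * }
     * @endcode
     *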
New streams are created with\n     * avformat_new_stream().\n     *\n     * - demuxing: streams are created by libavformat in avformat_open_input().\n     *             If AVFMTCTX_NOHEADER is set in ctx_flags, then new streams may also\n     *             appear in av_read_frame().\n     * - muxing: streams are created by the user before avformat_write_header().\n     *\n     * Freed by libavformat in avformat_free_context().\n     */\n    AVStream **streams;\n\n#if FF_API_FORMAT_FILENAME\n    /**\n     * input or output filename\n     *\n     * - demuxing: set by avformat_open_input()\n     * - muxing: may be set by the caller before avformat_write_header()\n     *\n     * @deprecated Use url instead.\n     */\n    attribute_deprecated\n    char filename[1024];\n#endif\n\n    /**\n     * input or output URL. Unlike the old filename field, this field has no\n     * length restriction.\n     *\n     * - demuxing: set by avformat_open_input(), initialized to an empty\n     *             string if url parameter was NULL in avformat_open_input().\n     * - muxing: may be set by the caller before calling avformat_write_header()\n     *           (or avformat_init_output() if that is called first) to a string\n     *           which is freeable by av_free(). Set to an empty string if it\n     *           was NULL in avformat_init_output().\n     *\n     * Freed by libavformat in avformat_free_context().\n     */\n    char *url;\n\n    /**\n     * Position of the first frame of the component, in\n     * AV_TIME_BASE fractional seconds. NEVER set this value directly:\n     * It is deduced from the AVStream values.\n     *\n     * Demuxing only, set by libavformat.\n     */\n    int64_t start_time;\n\n    /**\n     * Duration of the stream, in AV_TIME_BASE fractional\n     * seconds. Only set this value if you know none of the individual stream\n     * durations and also do not set any of them. 
This is deduced from the\n     * AVStream values if not set.\n     *\n     * Demuxing only, set by libavformat.\n     */\n    int64_t duration;\n\n    /**\n     * Total stream bitrate in bit/s, 0 if not\n     * available. Never set it directly if the file_size and the\n     * duration are known as FFmpeg can compute it automatically.\n     */\n    int64_t bit_rate;\n\n    unsigned int packet_size;\n    int max_delay;\n\n    /**\n     * Flags modifying the (de)muxer behaviour. A combination of AVFMT_FLAG_*.\n     * Set by the user before avformat_open_input() / avformat_write_header().\n     */\n    int flags;\n#define AVFMT_FLAG_GENPTS       0x0001 ///< Generate missing pts even if it requires parsing future frames.\n#define AVFMT_FLAG_IGNIDX       0x0002 ///< Ignore index.\n#define AVFMT_FLAG_NONBLOCK     0x0004 ///< Do not block when reading packets from input.\n#define AVFMT_FLAG_IGNDTS       0x0008 ///< Ignore DTS on frames that contain both DTS & PTS\n#define AVFMT_FLAG_NOFILLIN     0x0010 ///< Do not infer any values from other values, just return what is stored in the container\n#define AVFMT_FLAG_NOPARSE      0x0020 ///< Do not use AVParsers, you also must set AVFMT_FLAG_NOFILLIN as the fillin code works on frames and no parsing -> no frames. 
Also seeking to frames can not work if parsing to find frame boundaries has been disabled\n#define AVFMT_FLAG_NOBUFFER     0x0040 ///< Do not buffer frames when possible\n#define AVFMT_FLAG_CUSTOM_IO    0x0080 ///< The caller has supplied a custom AVIOContext, don't avio_close() it.\n#define AVFMT_FLAG_DISCARD_CORRUPT  0x0100 ///< Discard frames marked corrupted\n#define AVFMT_FLAG_FLUSH_PACKETS    0x0200 ///< Flush the AVIOContext every packet.\n/**\n * When muxing, try to avoid writing any random/volatile data to the output.\n * This includes any random IDs, real-time timestamps/dates, muxer version, etc.\n *\n * This flag is mainly intended for testing.\n */\n#define AVFMT_FLAG_BITEXACT         0x0400\n#define AVFMT_FLAG_MP4A_LATM    0x8000 ///< Enable RTP MP4A-LATM payload\n#define AVFMT_FLAG_SORT_DTS    0x10000 ///< try to interleave outputted packets by dts (using this flag can slow demuxing down)\n#define AVFMT_FLAG_PRIV_OPT    0x20000 ///< Enable use of private options by delaying codec open (this could be made default once all code is converted)\n#if FF_API_LAVF_KEEPSIDE_FLAG\n#define AVFMT_FLAG_KEEP_SIDE_DATA 0x40000 ///< Deprecated, does nothing.\n#endif\n#define AVFMT_FLAG_FAST_SEEK   0x80000 ///< Enable fast, but inaccurate seeks for some formats\n#define AVFMT_FLAG_SHORTEST   0x100000 ///< Stop muxing when the shortest stream stops.\n#define AVFMT_FLAG_AUTO_BSF   0x200000 ///< Add bitstream filters as requested by the muxer\n\n    /**\n     * Maximum size of the data read from input for determining\n     * the input container format.\n     * Demuxing only, set by the caller before avformat_open_input().\n     */\n    int64_t probesize;\n\n    /**\n     * Maximum duration (in AV_TIME_BASE units) of the data read\n     * from input in avformat_find_stream_info().\n     * Demuxing only, set by the caller before avformat_find_stream_info().\n     * Can be set to 0 to let avformat choose using a heuristic.\n     */\n    int64_t max_analyze_duration;\n\n    
const uint8_t *key;\n    int keylen;\n\n    unsigned int nb_programs;\n    AVProgram **programs;\n\n    /**\n     * Forced video codec_id.\n     * Demuxing: Set by user.\n     */\n    enum AVCodecID video_codec_id;\n\n    /**\n     * Forced audio codec_id.\n     * Demuxing: Set by user.\n     */\n    enum AVCodecID audio_codec_id;\n\n    /**\n     * Forced subtitle codec_id.\n     * Demuxing: Set by user.\n     */\n    enum AVCodecID subtitle_codec_id;\n\n    /**\n     * Maximum amount of memory in bytes to use for the index of each stream.\n     * If the index exceeds this size, entries will be discarded as\n     * needed to maintain a smaller size. This can lead to slower or less\n     * accurate seeking (depends on demuxer).\n     * Demuxers for which a full in-memory index is mandatory will ignore\n     * this.\n     * - muxing: unused\n     * - demuxing: set by user\n     */\n    unsigned int max_index_size;\n\n    /**\n     * Maximum amount of memory in bytes to use for buffering frames\n     * obtained from realtime capture devices.\n     */\n    unsigned int max_picture_buffer;\n\n    /**\n     * Number of chapters in AVChapter array.\n     * When muxing, chapters are normally written in the file header,\n     * so nb_chapters should normally be initialized before write_header\n     * is called. Some muxers (e.g. mov and mkv) can also write chapters\n     * in the trailer.  
To write chapters in the trailer, nb_chapters\n     * must be zero when write_header is called and non-zero when\n     * write_trailer is called.\n     * - muxing: set by user\n     * - demuxing: set by libavformat\n     */\n    unsigned int nb_chapters;\n    AVChapter **chapters;\n\n    /**\n     * Metadata that applies to the whole file.\n     *\n     * - demuxing: set by libavformat in avformat_open_input()\n     * - muxing: may be set by the caller before avformat_write_header()\n     *\n     * Freed by libavformat in avformat_free_context().\n     */\n    AVDictionary *metadata;\n\n    /**\n     * Start time of the stream in real world time, in microseconds\n     * since the Unix epoch (00:00 1st January 1970). That is, pts=0 in the\n     * stream was captured at this real world time.\n     * - muxing: Set by the caller before avformat_write_header(). If set to\n     *           either 0 or AV_NOPTS_VALUE, then the current wall-time will\n     *           be used.\n     * - demuxing: Set by libavformat. AV_NOPTS_VALUE if unknown. Note that\n     *             the value may become known after some number of frames\n     *             have been received.\n     */\n    int64_t start_time_realtime;\n\n    /**\n     * The number of frames used for determining the framerate in\n     * avformat_find_stream_info().\n     * Demuxing only, set by the caller before avformat_find_stream_info().\n     */\n    int fps_probe_size;\n\n    /**\n     * Error recognition; higher values will detect more errors but may\n     * misdetect some more or less valid parts as errors.\n     * Demuxing only, set by the caller before avformat_open_input().\n     */\n    int error_recognition;\n\n    /**\n     * Custom interrupt callbacks for the I/O layer.\n     *\n     * demuxing: set by the user before avformat_open_input().\n     * muxing: set by the user before avformat_write_header()\n     * (mainly useful for AVFMT_NOFILE formats). 
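     *
     * A minimal sketch of a deadline-style callback (the names and the
     * deadline variable are illustrative, not part of the API; "ic" is the
     * AVFormatContext):
     * @code
     * static int interrupt_cb(void *opaque)
     * {
     *     const int64_t *deadline = opaque; // absolute time in microseconds
     *     return av_gettime() >= *deadline; // non-zero aborts blocking I/O
     * }
     *
     * ic->interrupt_callback.callback = interrupt_cb;
     * ic->interrupt_callback.opaque   = &deadline;
     * @endcode
     *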
The callback\n     * should also be passed to avio_open2() if it's used to\n     * open the file.\n     */\n    AVIOInterruptCB interrupt_callback;\n\n    /**\n     * Flags to enable debugging.\n     */\n    int debug;\n#define FF_FDEBUG_TS        0x0001\n\n    /**\n     * Maximum buffering duration for interleaving.\n     *\n     * To ensure all the streams are interleaved correctly,\n     * av_interleaved_write_frame() will wait until it has at least one packet\n     * for each stream before actually writing any packets to the output file.\n     * When some streams are \"sparse\" (i.e. there are large gaps between\n     * successive packets), this can result in excessive buffering.\n     *\n     * This field specifies the maximum difference between the timestamps of the\n     * first and the last packet in the muxing queue, above which libavformat\n     * will output a packet regardless of whether it has queued a packet for all\n     * the streams.\n     *\n     * Muxing only, set by the caller before avformat_write_header().\n     */\n    int64_t max_interleave_delta;\n\n    /**\n     * Allow non-standard and experimental extension\n     * @see AVCodecContext.strict_std_compliance\n     */\n    int strict_std_compliance;\n\n    /**\n     * Flags for the user to detect events happening on the file. Flags must\n     * be cleared by the user once the event has been handled.\n     * A combination of AVFMT_EVENT_FLAG_*.\n     */\n    int event_flags;\n#define AVFMT_EVENT_FLAG_METADATA_UPDATED 0x0001 ///< The call resulted in updated metadata.\n\n    /**\n     * Maximum number of packets to read while waiting for the first timestamp.\n     * Decoding only.\n     */\n    int max_ts_probe;\n\n    /**\n     * Avoid negative timestamps during muxing.\n     * Any value of the AVFMT_AVOID_NEG_TS_* constants.\n     * Note, this only works when using av_interleaved_write_frame. 
(interleave_packet_per_dts is in use)\n     * - muxing: Set by user\n     * - demuxing: unused\n     */\n    int avoid_negative_ts;\n#define AVFMT_AVOID_NEG_TS_AUTO             -1 ///< Enabled when required by target format\n#define AVFMT_AVOID_NEG_TS_MAKE_NON_NEGATIVE 1 ///< Shift timestamps so they are non negative\n#define AVFMT_AVOID_NEG_TS_MAKE_ZERO         2 ///< Shift timestamps so that they start at 0\n\n    /**\n     * Transport stream id.\n     * This will be moved into demuxer private options. Thus no API/ABI compatibility\n     */\n    int ts_id;\n\n    /**\n     * Audio preload in microseconds.\n     * Note, not all formats support this and unpredictable things may happen if it is used when not supported.\n     * - encoding: Set by user\n     * - decoding: unused\n     */\n    int audio_preload;\n\n    /**\n     * Max chunk time in microseconds.\n     * Note, not all formats support this and unpredictable things may happen if it is used when not supported.\n     * - encoding: Set by user\n     * - decoding: unused\n     */\n    int max_chunk_duration;\n\n    /**\n     * Max chunk size in bytes\n     * Note, not all formats support this and unpredictable things may happen if it is used when not supported.\n     * - encoding: Set by user\n     * - decoding: unused\n     */\n    int max_chunk_size;\n\n    /**\n     * forces the use of wallclock timestamps as pts/dts of packets\n     * This has undefined results in the presence of B frames.\n     * - encoding: unused\n     * - decoding: Set by user\n     */\n    int use_wallclock_as_timestamps;\n\n    /**\n     * avio flags, used to force AVIO_FLAG_DIRECT.\n     * - encoding: unused\n     * - decoding: Set by user\n     */\n    int avio_flags;\n\n    /**\n     * The duration field can be estimated through various ways, and this field can be used\n     * to know how the duration was estimated.\n     * - encoding: unused\n     * - decoding: Read by user\n     */\n    enum AVDurationEstimationMethod 
duration_estimation_method;

    /**
     * Skip initial bytes when opening stream
     * - encoding: unused
     * - decoding: Set by user
     */
    int64_t skip_initial_bytes;

    /**
     * Correct single timestamp overflows
     * - encoding: unused
     * - decoding: Set by user
     */
    unsigned int correct_ts_overflow;

    /**
     * Force seeking to any (also non key) frames.
     * - encoding: unused
     * - decoding: Set by user
     */
    int seek2any;

    /**
     * Flush the I/O context after each packet.
     * - encoding: Set by user
     * - decoding: unused
     */
    int flush_packets;

    /**
     * Format probing score.
     * The maximal score is AVPROBE_SCORE_MAX; it is set when the demuxer
     * probes the format.
     * - encoding: unused
     * - decoding: set by avformat, read by user
     */
    int probe_score;

    /**
     * Maximum number of bytes to read when identifying the format.
     * - encoding: unused
     * - decoding: set by user
     */
    int format_probesize;

    /**
     * ',' separated list of allowed decoders.
     * If NULL then all are allowed
     * - encoding: unused
     * - decoding: set by user
     */
    char *codec_whitelist;

    /**
     * ',' separated list of allowed demuxers.
     * If NULL then all are allowed
     * - encoding: unused
     * - decoding: set by user
     */
    char *format_whitelist;

    /**
     * An opaque field for libavformat internal usage.
     * Must not be accessed in any way by callers.
     */
    AVFormatInternal *internal;

    /**
     * IO repositioned flag.
     * This is set by avformat when the underlying IO context read pointer
     * is repositioned, for example when doing byte based seeking.
     * Demuxers can use the flag to detect such changes.
     */
    int io_repositioned;

    /**
     * Forced video codec.
     * This allows forcing a specific decoder, even when there are 
multiple with\n     * the same codec_id.\n     * Demuxing: Set by user\n     */\n    AVCodec *video_codec;\n\n    /**\n     * Forced audio codec.\n     * This allows forcing a specific decoder, even when there are multiple with\n     * the same codec_id.\n     * Demuxing: Set by user\n     */\n    AVCodec *audio_codec;\n\n    /**\n     * Forced subtitle codec.\n     * This allows forcing a specific decoder, even when there are multiple with\n     * the same codec_id.\n     * Demuxing: Set by user\n     */\n    AVCodec *subtitle_codec;\n\n    /**\n     * Forced data codec.\n     * This allows forcing a specific decoder, even when there are multiple with\n     * the same codec_id.\n     * Demuxing: Set by user\n     */\n    AVCodec *data_codec;\n\n    /**\n     * Number of bytes to be written as padding in a metadata header.\n     * Demuxing: Unused.\n     * Muxing: Set by user via av_format_set_metadata_header_padding.\n     */\n    int metadata_header_padding;\n\n    /**\n     * User data.\n     * This is a place for some private data of the user.\n     */\n    void *opaque;\n\n    /**\n     * Callback used by devices to communicate with application.\n     */\n    av_format_control_message control_message_cb;\n\n    /**\n     * Output timestamp offset, in microseconds.\n     * Muxing: set by user\n     */\n    int64_t output_ts_offset;\n\n    /**\n     * dump format separator.\n     * can be \", \" or \"\\n      \" or anything else\n     * - muxing: Set by user.\n     * - demuxing: Set by user.\n     */\n    uint8_t *dump_separator;\n\n    /**\n     * Forced Data codec_id.\n     * Demuxing: Set by user.\n     */\n    enum AVCodecID data_codec_id;\n\n#if FF_API_OLD_OPEN_CALLBACKS\n    /**\n     * Called to open further IO contexts when needed for demuxing.\n     *\n     * This can be set by the user application to perform security checks on\n     * the URLs before opening them.\n     * The function should behave like avio_open2(), AVFormatContext is provided\n     * 
as contextual information and to reach AVFormatContext.opaque.
     *
     * If NULL then some simple checks are used together with avio_open2().
     *
     * Must not be accessed directly from outside avformat.
     * @see av_format_set_open_cb()
     *
     * Demuxing: Set by user.
     *
     * @deprecated Use io_open and io_close.
     */
    attribute_deprecated
    int (*open_cb)(struct AVFormatContext *s, AVIOContext **p, const char *url, int flags, const AVIOInterruptCB *int_cb, AVDictionary **options);
#endif

    /**
     * ',' separated list of allowed protocols.
     * - encoding: unused
     * - decoding: set by user
     */
    char *protocol_whitelist;

    /**
     * A callback for opening new IO streams.
     *
     * Whenever a muxer or a demuxer needs to open an IO stream (typically from
     * avformat_open_input() for demuxers, but for certain formats can happen at
     * other times as well), it will call this callback to obtain an IO context.
     *
     * @param s the format context
     * @param pb on success, the newly opened IO context should be returned here
     * @param url the url to open
     * @param flags a combination of AVIO_FLAG_*
     * @param options a dictionary of additional options, with the same
     *                semantics as in avio_open2()
     * @return 0 on success, a negative AVERROR code on failure
     *
     * @note Certain muxers and demuxers do nesting, i.e. they open one or more
     * additional internal format contexts. 
Thus the AVFormatContext pointer\n     * passed to this callback may be different from the one facing the caller.\n     * It will, however, have the same 'opaque' field.\n     */\n    int (*io_open)(struct AVFormatContext *s, AVIOContext **pb, const char *url,\n                   int flags, AVDictionary **options);\n\n    /**\n     * A callback for closing the streams opened with AVFormatContext.io_open().\n     */\n    void (*io_close)(struct AVFormatContext *s, AVIOContext *pb);\n\n    /**\n     * ',' separated list of disallowed protocols.\n     * - encoding: unused\n     * - decoding: set by user\n     */\n    char *protocol_blacklist;\n\n    /**\n     * The maximum number of streams.\n     * - encoding: unused\n     * - decoding: set by user\n     */\n    int max_streams;\n} AVFormatContext;\n\n#if FF_API_FORMAT_GET_SET\n/**\n * Accessors for some AVFormatContext fields. These used to be provided for ABI\n * compatibility, and do not need to be used anymore.\n */\nattribute_deprecated\nint av_format_get_probe_score(const AVFormatContext *s);\nattribute_deprecated\nAVCodec * av_format_get_video_codec(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_video_codec(AVFormatContext *s, AVCodec *c);\nattribute_deprecated\nAVCodec * av_format_get_audio_codec(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_audio_codec(AVFormatContext *s, AVCodec *c);\nattribute_deprecated\nAVCodec * av_format_get_subtitle_codec(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_subtitle_codec(AVFormatContext *s, AVCodec *c);\nattribute_deprecated\nAVCodec * av_format_get_data_codec(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_data_codec(AVFormatContext *s, AVCodec *c);\nattribute_deprecated\nint       av_format_get_metadata_header_padding(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_metadata_header_padding(AVFormatContext *s, int 
c);\nattribute_deprecated\nvoid *    av_format_get_opaque(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_opaque(AVFormatContext *s, void *opaque);\nattribute_deprecated\nav_format_control_message av_format_get_control_message_cb(const AVFormatContext *s);\nattribute_deprecated\nvoid      av_format_set_control_message_cb(AVFormatContext *s, av_format_control_message callback);\n#if FF_API_OLD_OPEN_CALLBACKS\nattribute_deprecated AVOpenCallback av_format_get_open_cb(const AVFormatContext *s);\nattribute_deprecated void av_format_set_open_cb(AVFormatContext *s, AVOpenCallback callback);\n#endif\n#endif\n\n/**\n * This function will cause global side data to be injected in the next packet\n * of each stream as well as after any subsequent seek.\n */\nvoid av_format_inject_global_side_data(AVFormatContext *s);\n\n/**\n * Returns the method used to set ctx->duration.\n *\n * @return AVFMT_DURATION_FROM_PTS, AVFMT_DURATION_FROM_STREAM, or AVFMT_DURATION_FROM_BITRATE.\n */\nenum AVDurationEstimationMethod av_fmt_ctx_get_duration_estimation_method(const AVFormatContext* ctx);\n\ntypedef struct AVPacketList {\n    AVPacket pkt;\n    struct AVPacketList *next;\n} AVPacketList;\n\n\n/**\n * @defgroup lavf_core Core functions\n * @ingroup libavf\n *\n * Functions for querying libavformat capabilities, allocating core structures,\n * etc.\n * @{\n */\n\n/**\n * Return the LIBAVFORMAT_VERSION_INT constant.\n */\nunsigned avformat_version(void);\n\n/**\n * Return the libavformat build-time configuration.\n */\nconst char *avformat_configuration(void);\n\n/**\n * Return the libavformat license.\n */\nconst char *avformat_license(void);\n\n#if FF_API_NEXT\n/**\n * Initialize libavformat and register all the muxers, demuxers and\n * protocols. 
If you do not call this function, then you can select
 * exactly which formats you want to support.
 *
 * @see av_register_input_format()
 * @see av_register_output_format()
 */
attribute_deprecated
void av_register_all(void);

attribute_deprecated
void av_register_input_format(AVInputFormat *format);
attribute_deprecated
void av_register_output_format(AVOutputFormat *format);
#endif

/**
 * Do global initialization of network libraries. This is optional,
 * and not recommended anymore.
 *
 * This function only exists to work around thread-safety issues
 * with older GnuTLS or OpenSSL libraries. If libavformat is linked
 * to newer versions of those libraries, or if you do not use them,
 * calling this function is unnecessary. Otherwise, you need to call
 * this function before any other threads using them are started.
 *
 * This function will be deprecated once support for older GnuTLS and
 * OpenSSL libraries is removed, at which point it will no longer serve
 * any purpose.
 */
int avformat_network_init(void);

/**
 * Undo the initialization done by avformat_network_init. Call it only
 * once for each time you called avformat_network_init.
 */
int avformat_network_deinit(void);

#if FF_API_NEXT
/**
 * If f is NULL, returns the first registered input format,
 * if f is non-NULL, returns the next registered input format after f
 * or NULL if f is the last one.
 */
attribute_deprecated
AVInputFormat  *av_iformat_next(const AVInputFormat  *f);

/**
 * If f is NULL, returns the first registered output format,
 * if f is non-NULL, returns the next registered output format after f
 * or NULL if f is the last one.
 */
attribute_deprecated
AVOutputFormat *av_oformat_next(const AVOutputFormat *f);
#endif

/**
 * Iterate over all registered muxers.
 *
 * @param opaque a pointer where libavformat will store the iteration state. 
Must\n *               point to NULL to start the iteration.\n *\n * @return the next registered muxer or NULL when the iteration is\n *         finished\n */\nconst AVOutputFormat *av_muxer_iterate(void **opaque);\n\n/**\n * Iterate over all registered demuxers.\n *\n * @param opaque a pointer where libavformat will store the iteration state. Must\n *               point to NULL to start the iteration.\n *\n * @return the next registered demuxer or NULL when the iteration is\n *         finished\n */\nconst AVInputFormat *av_demuxer_iterate(void **opaque);\n\n/**\n * Allocate an AVFormatContext.\n * avformat_free_context() can be used to free the context and everything\n * allocated by the framework within it.\n */\nAVFormatContext *avformat_alloc_context(void);\n\n/**\n * Free an AVFormatContext and all its streams.\n * @param s context to free\n */\nvoid avformat_free_context(AVFormatContext *s);\n\n/**\n * Get the AVClass for AVFormatContext. It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n */\nconst AVClass *avformat_get_class(void);\n\n/**\n * Add a new stream to a media file.\n *\n * When demuxing, it is called by the demuxer in read_header(). If the\n * flag AVFMTCTX_NOHEADER is set in s.ctx_flags, then it may also\n * be called in read_packet().\n *\n * When muxing, should be called by the user before avformat_write_header().\n *\n * User is required to call avcodec_close() and avformat_free_context() to\n * clean up the allocation by avformat_new_stream().\n *\n * @param s media file handle\n * @param c If non-NULL, the AVCodecContext corresponding to the new stream\n * will be initialized to use this codec. This is needed for e.g. 
codec-specific
 * defaults to be set, so codec should be provided if it is known.
 *
 * @return newly created stream or NULL on error.
 */
AVStream *avformat_new_stream(AVFormatContext *s, const AVCodec *c);

/**
 * Wrap an existing array as stream side data.
 *
 * @param st stream
 * @param type side information type
 * @param data the side data array. It must be allocated with the av_malloc()
 *             family of functions. The ownership of the data is transferred to
 *             st.
 * @param size side information size
 * @return zero on success, a negative AVERROR code on failure. On failure,
 *         the stream is unchanged and the data remains owned by the caller.
 */
int av_stream_add_side_data(AVStream *st, enum AVPacketSideDataType type,
                            uint8_t *data, size_t size);

/**
 * Allocate new side data for a stream.
 *
 * @param stream stream
 * @param type desired side information type
 * @param size side information size
 * @return pointer to freshly allocated data or NULL otherwise
 */
uint8_t *av_stream_new_side_data(AVStream *stream,
                                 enum AVPacketSideDataType type, int size);
/**
 * Get side information from stream.
 *
 * @param stream stream
 * @param type desired side information type
 * @param size pointer for side information size to store (optional)
 * @return pointer to data if present or NULL otherwise
 */
uint8_t *av_stream_get_side_data(const AVStream *stream,
                                 enum AVPacketSideDataType type, int *size);

AVProgram *av_new_program(AVFormatContext *s, int id);

/**
 * @}
 */


/**
 * Allocate an AVFormatContext for an output format.
 * avformat_free_context() can be used to free the context and
 * everything allocated by the framework within it.
 *
 * @param ctx is set to the created format context, or to NULL in
 * case of failure
 * @param oformat format to use for allocating the context, if 
NULL
 * format_name and filename are used instead
 * @param format_name the name of output format to use for allocating the
 * context, if NULL filename is used instead
 * @param filename the name of the file to use for allocating the
 * context, may be NULL
 * @return >= 0 in case of success, a negative AVERROR code in case of
 * failure
 */
int avformat_alloc_output_context2(AVFormatContext **ctx, AVOutputFormat *oformat,
                                   const char *format_name, const char *filename);

/**
 * @addtogroup lavf_decoding
 * @{
 */

/**
 * Find AVInputFormat based on the short name of the input format.
 */
AVInputFormat *av_find_input_format(const char *short_name);

/**
 * Guess the file format.
 *
 * @param pd        data to be probed
 * @param is_opened Whether the file is already opened; determines whether
 *                  demuxers with or without AVFMT_NOFILE are probed.
 */
AVInputFormat *av_probe_input_format(AVProbeData *pd, int is_opened);

/**
 * Guess the file format.
 *
 * @param pd        data to be probed
 * @param is_opened Whether the file is already opened; determines whether
 *                  demuxers with or without AVFMT_NOFILE are probed.
 * @param score_max A probe score larger than this is required to accept a
 *                  detection, the variable is set to the actual detection
 *                  score afterwards.
 *                  If the score is <= AVPROBE_SCORE_MAX / 4 it is recommended
 *                  to retry with a larger probe buffer.
 */
AVInputFormat *av_probe_input_format2(AVProbeData *pd, int is_opened, int *score_max);

/**
 * Guess the file format.
 *
 * @param is_opened Whether the file is already opened; determines whether
 *                  demuxers with or without AVFMT_NOFILE are probed.
 * @param score_ret The score of the best detection.
 */
AVInputFormat *av_probe_input_format3(AVProbeData *pd, int is_opened, int 
*score_ret);\n\n/**\n * Probe a bytestream to determine the input format. Each time a probe returns\n * with a score that is too low, the probe buffer size is increased and another\n * attempt is made. When the maximum probe size is reached, the input format\n * with the highest score is returned.\n *\n * @param pb the bytestream to probe\n * @param fmt the input format is put here\n * @param url the url of the stream\n * @param logctx the log context\n * @param offset the offset within the bytestream to probe from\n * @param max_probe_size the maximum probe buffer size (zero for default)\n * @return the score in case of success, a negative value corresponding to an\n *         AVERROR code otherwise; the maximal score is AVPROBE_SCORE_MAX\n */\nint av_probe_input_buffer2(AVIOContext *pb, AVInputFormat **fmt,\n                           const char *url, void *logctx,\n                           unsigned int offset, unsigned int max_probe_size);\n\n/**\n * Like av_probe_input_buffer2() but returns 0 on success\n */\nint av_probe_input_buffer(AVIOContext *pb, AVInputFormat **fmt,\n                          const char *url, void *logctx,\n                          unsigned int offset, unsigned int max_probe_size);\n\n/**\n * Open an input stream and read the header. 
The codecs are not opened.\n * The stream must be closed with avformat_close_input().\n *\n * @param ps Pointer to user-supplied AVFormatContext (allocated by avformat_alloc_context).\n *           May be a pointer to NULL, in which case an AVFormatContext is allocated by this\n *           function and written into ps.\n *           Note that a user-supplied AVFormatContext will be freed on failure.\n * @param url URL of the stream to open.\n * @param fmt If non-NULL, this parameter forces a specific input format.\n *            Otherwise the format is autodetected.\n * @param options  A dictionary filled with AVFormatContext and demuxer-private options.\n *                 On return this parameter will be destroyed and replaced with a dict containing\n *                 options that were not found. May be NULL.\n *\n * @return 0 on success, a negative AVERROR on failure.\n *\n * @note If you want to use custom IO, preallocate the format context and set its pb field.\n */\nint avformat_open_input(AVFormatContext **ps, const char *url, AVInputFormat *fmt, AVDictionary **options);\n\nattribute_deprecated\nint av_demuxer_open(AVFormatContext *ic);\n\n/**\n * Read packets of a media file to get stream information. This\n * is useful for file formats with no headers such as MPEG. 
This\n * function also computes the real framerate in case of MPEG-2 repeat\n * frame mode.\n * The logical file position is not changed by this function;\n * examined packets may be buffered for later processing.\n *\n * @param ic media file handle\n * @param options  If non-NULL, an ic.nb_streams long array of pointers to\n *                 dictionaries, where i-th member contains options for\n *                 codec corresponding to i-th stream.\n *                 On return each dictionary will be filled with options that were not found.\n * @return >=0 if OK, AVERROR_xxx on error\n *\n * @note this function isn't guaranteed to open all the codecs, so\n *       options being non-empty at return is a perfectly normal behavior.\n *\n * @todo Let the user decide somehow what information is needed so that\n *       we do not waste time getting stuff the user does not need.\n */\nint avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options);\n\n/**\n * Find the programs which belong to a given stream.\n *\n * @param ic    media file handle\n * @param last  the last found program, the search will start after this\n *              program, or from the beginning if it is NULL\n * @param s     stream index\n * @return the next program which belongs to s, NULL if no program is found or\n *         the last program is not among the programs of ic.\n */\nAVProgram *av_find_program_from_stream(AVFormatContext *ic, AVProgram *last, int s);\n\nvoid av_program_add_stream_index(AVFormatContext *ac, int progid, unsigned int idx);\n\n/**\n * Find the \"best\" stream in the file.\n * The best stream is determined according to various heuristics as the most\n * likely to be what the user expects.\n * If the decoder parameter is non-NULL, av_find_best_stream will find the\n * default decoder for the stream's codec; streams for which no decoder can\n * be found are ignored.\n *\n * @param ic                media file handle\n * @param type              stream type: 
video, audio, subtitles, etc.\n * @param wanted_stream_nb  user-requested stream number,\n *                          or -1 for automatic selection\n * @param related_stream    try to find a stream related (eg. in the same\n *                          program) to this one, or -1 if none\n * @param decoder_ret       if non-NULL, returns the decoder for the\n *                          selected stream\n * @param flags             flags; none are currently defined\n * @return  the non-negative stream number in case of success,\n *          AVERROR_STREAM_NOT_FOUND if no stream with the requested type\n *          could be found,\n *          AVERROR_DECODER_NOT_FOUND if streams were found but no decoder\n * @note  If av_find_best_stream returns successfully and decoder_ret is not\n *        NULL, then *decoder_ret is guaranteed to be set to a valid AVCodec.\n */\nint av_find_best_stream(AVFormatContext *ic,\n                        enum AVMediaType type,\n                        int wanted_stream_nb,\n                        int related_stream,\n                        AVCodec **decoder_ret,\n                        int flags);\n\n/**\n * Return the next frame of a stream.\n * This function returns what is stored in the file, and does not validate\n * that what is there are valid frames for the decoder. It will split what is\n * stored in the file into frames and return one for each call. It will not\n * omit invalid data between valid frames so as to give the decoder the maximum\n * information possible for decoding.\n *\n * If pkt->buf is NULL, then the packet is valid until the next\n * av_read_frame() or until avformat_close_input(). Otherwise the packet\n * is valid indefinitely. In both cases the packet must be freed with\n * av_packet_unref when it is no longer needed. For video, the packet contains\n * exactly one frame. For audio, it contains an integer number of frames if each\n * frame has a known fixed size (e.g. PCM or ADPCM data). 
If the audio frames\n * have a variable size (e.g. MPEG audio), then it contains one frame.\n *\n * pkt->pts, pkt->dts and pkt->duration are always set to correct\n * values in AVStream.time_base units (and guessed if the format cannot\n * provide them). pkt->pts can be AV_NOPTS_VALUE if the video format\n * has B-frames, so it is better to rely on pkt->dts if you do not\n * decompress the payload.\n *\n * @return 0 if OK, < 0 on error or end of file\n */\nint av_read_frame(AVFormatContext *s, AVPacket *pkt);\n\n/**\n * Seek to the keyframe at 'timestamp' in 'stream_index'.\n *\n * @param s media file handle\n * @param stream_index If stream_index is (-1), a default\n * stream is selected, and timestamp is automatically converted\n * from AV_TIME_BASE units to the stream specific time_base.\n * @param timestamp Timestamp in AVStream.time_base units\n *        or, if no stream is specified, in AV_TIME_BASE units.\n * @param flags flags which select direction and seeking mode\n * @return >= 0 on success\n */\nint av_seek_frame(AVFormatContext *s, int stream_index, int64_t timestamp,\n                  int flags);\n\n/**\n * Seek to timestamp ts.\n * Seeking will be done so that the point from which all active streams\n * can be presented successfully will be closest to ts and within min/max_ts.\n * Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.\n *\n * If flags contain AVSEEK_FLAG_BYTE, then all timestamps are in bytes and\n * are the file position (this may not be supported by all demuxers).\n * If flags contain AVSEEK_FLAG_FRAME, then all timestamps are in frames\n * in the stream with stream_index (this may not be supported by all demuxers).\n * Otherwise all timestamps are in units of the stream selected by stream_index\n * or if stream_index is -1, in AV_TIME_BASE units.\n * If flags contain AVSEEK_FLAG_ANY, then non-keyframes are treated as\n * keyframes (this may not be supported by all demuxers).\n * If flags contain 
AVSEEK_FLAG_BACKWARD, it is ignored.\n *\n * @param s media file handle\n * @param stream_index index of the stream which is used as time base reference\n * @param min_ts smallest acceptable timestamp\n * @param ts target timestamp\n * @param max_ts largest acceptable timestamp\n * @param flags flags\n * @return >=0 on success, error code otherwise\n *\n * @note This is part of the new seek API which is still under construction.\n *       Thus do not use this yet. It may change at any time, do not expect\n *       ABI compatibility yet!\n */\nint avformat_seek_file(AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags);\n\n/**\n * Discard all internally buffered data. This can be useful when dealing with\n * discontinuities in the byte stream. Generally works only with formats that\n * can resync. This includes headerless formats like MPEG-TS/TS but should also\n * work with NUT, Ogg and in a limited way AVI for example.\n *\n * The set of streams, the detected duration, stream parameters and codecs do\n * not change when calling this function. If you want a complete reset, it's\n * better to open a new AVFormatContext.\n *\n * This does not flush the AVIOContext (s->pb). If necessary, call\n * avio_flush(s->pb) before calling this function.\n *\n * @param s media file handle\n * @return >=0 on success, error code otherwise\n */\nint avformat_flush(AVFormatContext *s);\n\n/**\n * Start playing a network-based stream (e.g. RTSP stream) at the\n * current position.\n */\nint av_read_play(AVFormatContext *s);\n\n/**\n * Pause a network-based stream (e.g. RTSP stream).\n *\n * Use av_read_play() to resume it.\n */\nint av_read_pause(AVFormatContext *s);\n\n/**\n * Close an opened input AVFormatContext. 
Free it and all its contents\n * and set *s to NULL.\n */\nvoid avformat_close_input(AVFormatContext **s);\n/**\n * @}\n */\n\n#define AVSEEK_FLAG_BACKWARD 1 ///< seek backward\n#define AVSEEK_FLAG_BYTE     2 ///< seeking based on position in bytes\n#define AVSEEK_FLAG_ANY      4 ///< seek to any frame, even non-keyframes\n#define AVSEEK_FLAG_FRAME    8 ///< seeking based on frame number\n\n/**\n * @addtogroup lavf_encoding\n * @{\n */\n\n#define AVSTREAM_INIT_IN_WRITE_HEADER 0 ///< stream parameters initialized in avformat_write_header\n#define AVSTREAM_INIT_IN_INIT_OUTPUT  1 ///< stream parameters initialized in avformat_init_output\n\n/**\n * Allocate the stream private data and write the stream header to\n * an output media file.\n *\n * @param s Media file handle, must be allocated with avformat_alloc_context().\n *          Its oformat field must be set to the desired output format;\n *          Its pb field must be set to an already opened AVIOContext.\n * @param options  An AVDictionary filled with AVFormatContext and muxer-private options.\n *                 On return this parameter will be destroyed and replaced with a dict containing\n *                 options that were not found. 
May be NULL.\n *\n * @return AVSTREAM_INIT_IN_WRITE_HEADER on success if the codec had not already been fully initialized in avformat_init,\n *         AVSTREAM_INIT_IN_INIT_OUTPUT  on success if the codec had already been fully initialized in avformat_init,\n *         negative AVERROR on failure.\n *\n * @see av_opt_find, av_dict_set, avio_open, av_oformat_next, avformat_init_output.\n */\nav_warn_unused_result\nint avformat_write_header(AVFormatContext *s, AVDictionary **options);\n\n/**\n * Allocate the stream private data and initialize the codec, but do not write the header.\n * May optionally be used before avformat_write_header to initialize stream parameters\n * before actually writing the header.\n * If using this function, do not pass the same options to avformat_write_header.\n *\n * @param s Media file handle, must be allocated with avformat_alloc_context().\n *          Its oformat field must be set to the desired output format;\n *          Its pb field must be set to an already opened AVIOContext.\n * @param options  An AVDictionary filled with AVFormatContext and muxer-private options.\n *                 On return this parameter will be destroyed and replaced with a dict containing\n *                 options that were not found. May be NULL.\n *\n * @return AVSTREAM_INIT_IN_WRITE_HEADER on success if the codec requires avformat_write_header to fully initialize,\n *         AVSTREAM_INIT_IN_INIT_OUTPUT  on success if the codec has been fully initialized,\n *         negative AVERROR on failure.\n *\n * @see av_opt_find, av_dict_set, avio_open, av_oformat_next, avformat_write_header.\n */\nav_warn_unused_result\nint avformat_init_output(AVFormatContext *s, AVDictionary **options);\n\n/**\n * Write a packet to an output media file.\n *\n * This function passes the packet directly to the muxer, without any buffering\n * or reordering. The caller is responsible for correctly interleaving the\n * packets if the format requires it. 
Callers that want libavformat to handle\n * the interleaving should call av_interleaved_write_frame() instead of this\n * function.\n *\n * @param s media file handle\n * @param pkt The packet containing the data to be written. Note that unlike\n *            av_interleaved_write_frame(), this function does not take\n *            ownership of the packet passed to it (though some muxers may make\n *            an internal reference to the input packet).\n *            <br>\n *            This parameter can be NULL (at any time, not just at the end), in\n *            order to immediately flush data buffered within the muxer, for\n *            muxers that buffer up data internally before writing it to the\n *            output.\n *            <br>\n *            Packet's @ref AVPacket.stream_index \"stream_index\" field must be\n *            set to the index of the corresponding stream in @ref\n *            AVFormatContext.streams \"s->streams\".\n *            <br>\n *            The timestamps (@ref AVPacket.pts \"pts\", @ref AVPacket.dts \"dts\")\n *            must be set to correct values in the stream's timebase (unless the\n *            output format is flagged with the AVFMT_NOTIMESTAMPS flag, then\n *            they can be set to AV_NOPTS_VALUE).\n *            The dts for subsequent packets passed to this function must be strictly\n *            increasing when compared in their respective timebases (unless the\n *            output format is flagged with the AVFMT_TS_NONSTRICT, then they\n *            merely have to be nondecreasing).  
@ref AVPacket.duration\n *            \"duration\" should also be set if known.\n * @return < 0 on error, = 0 if OK, 1 if flushed and there is no more data to flush\n *\n * @see av_interleaved_write_frame()\n */\nint av_write_frame(AVFormatContext *s, AVPacket *pkt);\n\n/**\n * Write a packet to an output media file ensuring correct interleaving.\n *\n * This function will buffer the packets internally as needed to make sure the\n * packets in the output file are properly interleaved in the order of\n * increasing dts. Callers doing their own interleaving should call\n * av_write_frame() instead of this function.\n *\n * Using this function instead of av_write_frame() can give muxers advance\n * knowledge of future packets, improving e.g. the behaviour of the mp4\n * muxer for VFR content in fragmenting mode.\n *\n * @param s media file handle\n * @param pkt The packet containing the data to be written.\n *            <br>\n *            If the packet is reference-counted, this function will take\n *            ownership of this reference and unreference it later when it sees\n *            fit.\n *            The caller must not access the data through this reference after\n *            this function returns. 
If the packet is not reference-counted,\n *            libavformat will make a copy.\n *            <br>\n *            This parameter can be NULL (at any time, not just at the end), to\n *            flush the interleaving queues.\n *            <br>\n *            Packet's @ref AVPacket.stream_index \"stream_index\" field must be\n *            set to the index of the corresponding stream in @ref\n *            AVFormatContext.streams \"s->streams\".\n *            <br>\n *            The timestamps (@ref AVPacket.pts \"pts\", @ref AVPacket.dts \"dts\")\n *            must be set to correct values in the stream's timebase (unless the\n *            output format is flagged with the AVFMT_NOTIMESTAMPS flag, then\n *            they can be set to AV_NOPTS_VALUE).\n *            The dts for subsequent packets in one stream must be strictly\n *            increasing (unless the output format is flagged with the\n *            AVFMT_TS_NONSTRICT, then they merely have to be nondecreasing).\n *            @ref AVPacket.duration \"duration\" should also be set if known.\n *\n * @return 0 on success, a negative AVERROR on error. 
Libavformat will always\n *         take care of freeing the packet, even if this function fails.\n *\n * @see av_write_frame(), AVFormatContext.max_interleave_delta\n */\nint av_interleaved_write_frame(AVFormatContext *s, AVPacket *pkt);\n\n/**\n * Write an uncoded frame to an output media file.\n *\n * The frame must be correctly interleaved according to the container\n * specification; if not, then av_interleaved_write_frame() must be used.\n *\n * See av_interleaved_write_frame() for details.\n */\nint av_write_uncoded_frame(AVFormatContext *s, int stream_index,\n                           AVFrame *frame);\n\n/**\n * Write an uncoded frame to an output media file.\n *\n * If the muxer supports it, this function makes it possible to write an AVFrame\n * structure directly, without encoding it into a packet.\n * It is mostly useful for devices and similar special muxers that use raw\n * video or PCM data and will not serialize it into a byte stream.\n *\n * To test whether it is possible to use it with a given muxer and stream,\n * use av_write_uncoded_frame_query().\n *\n * The caller gives up ownership of the frame and must not access it\n * afterwards.\n *\n * @return  >=0 for success, a negative code on error\n */\nint av_interleaved_write_uncoded_frame(AVFormatContext *s, int stream_index,\n                                       AVFrame *frame);\n\n/**\n * Test whether a muxer supports uncoded frame.\n *\n * @return  >=0 if an uncoded frame can be written to that muxer and stream,\n *          <0 if not\n */\nint av_write_uncoded_frame_query(AVFormatContext *s, int stream_index);\n\n/**\n * Write the stream trailer to an output media file and free the\n * file private data.\n *\n * May only be called after a successful call to avformat_write_header.\n *\n * @param s media file handle\n * @return 0 if OK, AVERROR_xxx on error\n */\nint av_write_trailer(AVFormatContext *s);\n\n/**\n * Return the output format in the list of registered output formats\n * which 
best matches the provided parameters, or return NULL if\n * there is no match.\n *\n * @param short_name if non-NULL checks if short_name matches with the\n * names of the registered formats\n * @param filename if non-NULL checks if filename terminates with the\n * extensions of the registered formats\n * @param mime_type if non-NULL checks if mime_type matches with the\n * MIME type of the registered formats\n */\nAVOutputFormat *av_guess_format(const char *short_name,\n                                const char *filename,\n                                const char *mime_type);\n\n/**\n * Guess the codec ID based upon muxer and filename.\n */\nenum AVCodecID av_guess_codec(AVOutputFormat *fmt, const char *short_name,\n                            const char *filename, const char *mime_type,\n                            enum AVMediaType type);\n\n/**\n * Get timing information for the data currently output.\n * The exact meaning of \"currently output\" depends on the format.\n * It is mostly relevant for devices that have an internal buffer and/or\n * work in real time.\n * @param s          media file handle\n * @param stream     stream in the media file\n * @param[out] dts   DTS of the last packet output for the stream, in stream\n *                   time_base units\n * @param[out] wall  absolute time when that packet was output,\n *                   in microseconds\n * @return  0 if OK, AVERROR(ENOSYS) if the format does not support it\n * Note: some formats or devices may not allow measuring dts and wall time\n * atomically.\n */\nint av_get_output_timestamp(struct AVFormatContext *s, int stream,\n                            int64_t *dts, int64_t *wall);\n\n\n/**\n * @}\n */\n\n\n/**\n * @defgroup lavf_misc Utility functions\n * @ingroup libavf\n * @{\n *\n * Miscellaneous utility functions related to both muxing and demuxing\n * (or neither).\n */\n\n/**\n * Send a nice hexadecimal dump of a buffer to the specified file stream.\n *\n * @param f The file stream 
pointer where the dump should be sent to.\n * @param buf buffer\n * @param size buffer size\n *\n * @see av_hex_dump_log, av_pkt_dump2, av_pkt_dump_log2\n */\nvoid av_hex_dump(FILE *f, const uint8_t *buf, int size);\n\n/**\n * Send a nice hexadecimal dump of a buffer to the log.\n *\n * @param avcl A pointer to an arbitrary struct of which the first field is a\n * pointer to an AVClass struct.\n * @param level The importance level of the message, lower values signifying\n * higher importance.\n * @param buf buffer\n * @param size buffer size\n *\n * @see av_hex_dump, av_pkt_dump2, av_pkt_dump_log2\n */\nvoid av_hex_dump_log(void *avcl, int level, const uint8_t *buf, int size);\n\n/**\n * Send a nice dump of a packet to the specified file stream.\n *\n * @param f The file stream pointer where the dump should be sent to.\n * @param pkt packet to dump\n * @param dump_payload True if the payload must be displayed, too.\n * @param st AVStream that the packet belongs to\n */\nvoid av_pkt_dump2(FILE *f, const AVPacket *pkt, int dump_payload, const AVStream *st);\n\n\n/**\n * Send a nice dump of a packet to the log.\n *\n * @param avcl A pointer to an arbitrary struct of which the first field is a\n * pointer to an AVClass struct.\n * @param level The importance level of the message, lower values signifying\n * higher importance.\n * @param pkt packet to dump\n * @param dump_payload True if the payload must be displayed, too.\n * @param st AVStream that the packet belongs to\n */\nvoid av_pkt_dump_log2(void *avcl, int level, const AVPacket *pkt, int dump_payload,\n                      const AVStream *st);\n\n/**\n * Get the AVCodecID for the given codec tag tag.\n * If no codec id is found returns AV_CODEC_ID_NONE.\n *\n * @param tags list of supported codec_id-codec_tag pairs, as stored\n * in AVInputFormat.codec_tag and AVOutputFormat.codec_tag\n * @param tag  codec tag to match to a codec ID\n */\nenum AVCodecID av_codec_get_id(const struct AVCodecTag * const *tags, 
unsigned int tag);\n\n/**\n * Get the codec tag for the given codec id id.\n * If no codec tag is found returns 0.\n *\n * @param tags list of supported codec_id-codec_tag pairs, as stored\n * in AVInputFormat.codec_tag and AVOutputFormat.codec_tag\n * @param id   codec ID to match to a codec tag\n */\nunsigned int av_codec_get_tag(const struct AVCodecTag * const *tags, enum AVCodecID id);\n\n/**\n * Get the codec tag for the given codec id.\n *\n * @param tags list of supported codec_id - codec_tag pairs, as stored\n * in AVInputFormat.codec_tag and AVOutputFormat.codec_tag\n * @param id codec id that should be searched for in the list\n * @param tag A pointer to the found tag\n * @return 0 if id was not found in tags, > 0 if it was found\n */\nint av_codec_get_tag2(const struct AVCodecTag * const *tags, enum AVCodecID id,\n                      unsigned int *tag);\n\nint av_find_default_stream_index(AVFormatContext *s);\n\n/**\n * Get the index for a specific timestamp.\n *\n * @param st        stream that the timestamp belongs to\n * @param timestamp timestamp to retrieve the index for\n * @param flags if AVSEEK_FLAG_BACKWARD then the returned index will correspond\n *                 to the timestamp which is <= the requested one, if backward\n *                 is 0, then it will be >=\n *              if AVSEEK_FLAG_ANY seek to any frame, only keyframes otherwise\n * @return < 0 if no such timestamp could be found\n */\nint av_index_search_timestamp(AVStream *st, int64_t timestamp, int flags);\n\n/**\n * Add an index entry into a sorted list. 
Update the entry if the list\n * already contains it.\n *\n * @param timestamp timestamp in the time base of the given stream\n */\nint av_add_index_entry(AVStream *st, int64_t pos, int64_t timestamp,\n                       int size, int distance, int flags);\n\n\n/**\n * Split a URL string into components.\n *\n * The pointers to buffers for storing individual components may be null,\n * in order to ignore that component. Buffers for components not found are\n * set to empty strings. If the port is not found, it is set to a negative\n * value.\n *\n * @param proto the buffer for the protocol\n * @param proto_size the size of the proto buffer\n * @param authorization the buffer for the authorization\n * @param authorization_size the size of the authorization buffer\n * @param hostname the buffer for the host name\n * @param hostname_size the size of the hostname buffer\n * @param port_ptr a pointer to store the port number in\n * @param path the buffer for the path\n * @param path_size the size of the path buffer\n * @param url the URL to split\n */\nvoid av_url_split(char *proto,         int proto_size,\n                  char *authorization, int authorization_size,\n                  char *hostname,      int hostname_size,\n                  int *port_ptr,\n                  char *path,          int path_size,\n                  const char *url);\n\n\n/**\n * Print detailed information about the input or output format, such as\n * duration, bitrate, streams, container, programs, metadata, side data,\n * codec and time base.\n *\n * @param ic        the context to analyze\n * @param index     index of the stream to dump information about\n * @param url       the URL to print, such as source or destination file\n * @param is_output Select whether the specified context is an input(0) or output(1)\n */\nvoid av_dump_format(AVFormatContext *ic,\n                    int index,\n                    const char *url,\n                    int is_output);\n\n\n#define 
AV_FRAME_FILENAME_FLAGS_MULTIPLE 1 ///< Allow multiple %d\n\n/**\n * Return in 'buf' the path with '%d' replaced by a number.\n *\n * Also handles the '%0nd' format where 'n' is the total number\n * of digits, as well as '%%'.\n *\n * @param buf destination buffer\n * @param buf_size destination buffer size\n * @param path numbered sequence string\n * @param number frame number\n * @param flags AV_FRAME_FILENAME_FLAGS_*\n * @return 0 if OK, -1 on format error\n */\nint av_get_frame_filename2(char *buf, int buf_size,\n                          const char *path, int number, int flags);\n\nint av_get_frame_filename(char *buf, int buf_size,\n                          const char *path, int number);\n\n/**\n * Check whether filename actually is a numbered sequence generator.\n *\n * @param filename possible numbered sequence string\n * @return 1 if a valid numbered sequence string, 0 otherwise\n */\nint av_filename_number_test(const char *filename);\n\n/**\n * Generate an SDP for an RTP session.\n *\n * Note, this overwrites the id values of AVStreams in the muxer contexts\n * for getting unique dynamic payload types.\n *\n * @param ac array of AVFormatContexts describing the RTP streams. If the\n *           array consists of only one context, that context can contain\n *           multiple AVStreams (one AVStream per RTP stream). 
Otherwise,\n *           all the contexts in the array (an AVFormatContext per RTP stream)\n *           must contain only one AVStream.\n * @param n_files number of AVFormatContexts contained in ac\n * @param buf buffer where the SDP will be stored (must be allocated by\n *            the caller)\n * @param size the size of the buffer\n * @return 0 if OK, AVERROR_xxx on error\n */\nint av_sdp_create(AVFormatContext *ac[], int n_files, char *buf, int size);\n\n/**\n * Return a positive value if the given filename has one of the given\n * extensions, 0 otherwise.\n *\n * @param filename   file name to check against the given extensions\n * @param extensions a comma-separated list of filename extensions\n */\nint av_match_ext(const char *filename, const char *extensions);\n\n/**\n * Test if the given container can store a codec.\n *\n * @param ofmt           container to check for compatibility\n * @param codec_id       codec to potentially store in container\n * @param std_compliance standards compliance level, one of FF_COMPLIANCE_*\n *\n * @return 1 if codec with ID codec_id can be stored in ofmt, 0 if it cannot.\n *         A negative number if this information is not available.\n */\nint avformat_query_codec(const AVOutputFormat *ofmt, enum AVCodecID codec_id,\n                         int std_compliance);\n\n/**\n * @defgroup riff_fourcc RIFF FourCCs\n * @{\n * Get the tables mapping RIFF FourCCs to libavcodec AVCodecIDs. 
The tables are\n * meant to be passed to av_codec_get_id()/av_codec_get_tag() as in the\n * following code:\n * @code\n * uint32_t tag = MKTAG('H', '2', '6', '4');\n * const struct AVCodecTag *table[] = { avformat_get_riff_video_tags(), 0 };\n * enum AVCodecID id = av_codec_get_id(table, tag);\n * @endcode\n */\n/**\n * @return the table mapping RIFF FourCCs for video to libavcodec AVCodecID.\n */\nconst struct AVCodecTag *avformat_get_riff_video_tags(void);\n/**\n * @return the table mapping RIFF FourCCs for audio to AVCodecID.\n */\nconst struct AVCodecTag *avformat_get_riff_audio_tags(void);\n/**\n * @return the table mapping MOV FourCCs for video to libavcodec AVCodecID.\n */\nconst struct AVCodecTag *avformat_get_mov_video_tags(void);\n/**\n * @return the table mapping MOV FourCCs for audio to AVCodecID.\n */\nconst struct AVCodecTag *avformat_get_mov_audio_tags(void);\n\n/**\n * @}\n */\n\n/**\n * Guess the sample aspect ratio of a frame, based on both the stream and the\n * frame aspect ratio.\n *\n * Since the frame aspect ratio is set by the codec but the stream aspect ratio\n * is set by the demuxer, these two may not be equal. This function tries to\n * return the value that you should use if you would like to display the frame.\n *\n * Basic logic is to use the stream aspect ratio if it is set to something sane\n * otherwise use the frame aspect ratio. 
This way a container setting, which is\n * usually easy to modify can override the coded value in the frames.\n *\n * @param format the format context which the stream is part of\n * @param stream the stream which the frame is part of\n * @param frame the frame with the aspect ratio to be determined\n * @return the guessed (valid) sample_aspect_ratio, 0/1 if no idea\n */\nAVRational av_guess_sample_aspect_ratio(AVFormatContext *format, AVStream *stream, AVFrame *frame);\n\n/**\n * Guess the frame rate, based on both the container and codec information.\n *\n * @param ctx the format context which the stream is part of\n * @param stream the stream which the frame is part of\n * @param frame the frame for which the frame rate should be determined, may be NULL\n * @return the guessed (valid) frame rate, 0/1 if no idea\n */\nAVRational av_guess_frame_rate(AVFormatContext *ctx, AVStream *stream, AVFrame *frame);\n\n/**\n * Check if the stream st contained in s is matched by the stream specifier\n * spec.\n *\n * See the \"stream specifiers\" chapter in the documentation for the syntax\n * of spec.\n *\n * @return  >0 if st is matched by spec;\n *          0  if st is not matched by spec;\n *          AVERROR code if spec is invalid\n *\n * @note  A stream specifier can match several streams in the format.\n */\nint avformat_match_stream_specifier(AVFormatContext *s, AVStream *st,\n                                    const char *spec);\n\nint avformat_queue_attached_pictures(AVFormatContext *s);\n\n#if FF_API_OLD_BSF\n/**\n * Apply a list of bitstream filters to a packet.\n *\n * @param codec AVCodecContext, usually from an AVStream\n * @param pkt the packet to apply filters to. 
If, on success, the returned\n *        packet has size == 0 and side_data_elems == 0, it indicates that\n *        the packet should be dropped\n * @param bsfc a NULL-terminated list of filters to apply\n * @return  >=0 on success;\n *          AVERROR code on failure\n */\nattribute_deprecated\nint av_apply_bitstream_filters(AVCodecContext *codec, AVPacket *pkt,\n                               AVBitStreamFilterContext *bsfc);\n#endif\n\nenum AVTimebaseSource {\n    AVFMT_TBCF_AUTO = -1,\n    AVFMT_TBCF_DECODER,\n    AVFMT_TBCF_DEMUXER,\n#if FF_API_R_FRAME_RATE\n    AVFMT_TBCF_R_FRAMERATE,\n#endif\n};\n\n/**\n * Transfer internal timing information from one stream to another.\n *\n * This function is useful when doing stream copy.\n *\n * @param ofmt     target output format for ost\n * @param ost      output stream which needs timings copy and adjustments\n * @param ist      reference input stream to copy timings from\n * @param copy_tb  define from where the stream codec timebase needs to be imported\n */\nint avformat_transfer_internal_stream_timing_info(const AVOutputFormat *ofmt,\n                                                  AVStream *ost, const AVStream *ist,\n                                                  enum AVTimebaseSource copy_tb);\n\n/**\n * Get the internal codec timebase from a stream.\n *\n * @param st  input stream to extract the timebase from\n */\nAVRational av_stream_get_codec_timebase(const AVStream *st);\n\n/**\n * @}\n */\n\n#endif /* AVFORMAT_AVFORMAT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavformat/avio.h",
    "content": "/*\n * copyright (c) 2001 Fabrice Bellard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n#ifndef AVFORMAT_AVIO_H\n#define AVFORMAT_AVIO_H\n\n/**\n * @file\n * @ingroup lavf_io\n * Buffered I/O operations\n */\n\n#include <stdint.h>\n\n#include \"libavutil/common.h\"\n#include \"libavutil/dict.h\"\n#include \"libavutil/log.h\"\n\n#include \"libavformat/version.h\"\n\n/**\n * Seeking works like for a local file.\n */\n#define AVIO_SEEKABLE_NORMAL (1 << 0)\n\n/**\n * Seeking by timestamp with avio_seek_time() is possible.\n */\n#define AVIO_SEEKABLE_TIME   (1 << 1)\n\n/**\n * Callback for checking whether to abort blocking functions.\n * AVERROR_EXIT is returned in this case by the interrupted\n * function. During blocking operations, callback is called with\n * opaque as parameter. 
If the callback returns 1, the\n * blocking operation will be aborted.\n *\n * No members can be added to this struct without a major bump, if\n * new elements have been added after this struct in AVFormatContext\n * or AVIOContext.\n */\ntypedef struct AVIOInterruptCB {\n    int (*callback)(void*);\n    void *opaque;\n} AVIOInterruptCB;\n\n/**\n * Directory entry types.\n */\nenum AVIODirEntryType {\n    AVIO_ENTRY_UNKNOWN,\n    AVIO_ENTRY_BLOCK_DEVICE,\n    AVIO_ENTRY_CHARACTER_DEVICE,\n    AVIO_ENTRY_DIRECTORY,\n    AVIO_ENTRY_NAMED_PIPE,\n    AVIO_ENTRY_SYMBOLIC_LINK,\n    AVIO_ENTRY_SOCKET,\n    AVIO_ENTRY_FILE,\n    AVIO_ENTRY_SERVER,\n    AVIO_ENTRY_SHARE,\n    AVIO_ENTRY_WORKGROUP,\n};\n\n/**\n * Describes single entry of the directory.\n *\n * Only name and type fields are guaranteed be set.\n * Rest of fields are protocol or/and platform dependent and might be unknown.\n */\ntypedef struct AVIODirEntry {\n    char *name;                           /**< Filename */\n    int type;                             /**< Type of the entry */\n    int utf8;                             /**< Set to 1 when name is encoded with UTF-8, 0 otherwise.\n                                               Name can be encoded with UTF-8 even though 0 is set. */\n    int64_t size;                         /**< File size in bytes, -1 if unknown. */\n    int64_t modification_timestamp;       /**< Time of last modification in microseconds since unix\n                                               epoch, -1 if unknown. */\n    int64_t access_timestamp;             /**< Time of last access in microseconds since unix epoch,\n                                               -1 if unknown. */\n    int64_t status_change_timestamp;      /**< Time of last status change in microseconds since unix\n                                               epoch, -1 if unknown. */\n    int64_t user_id;                      /**< User ID of owner, -1 if unknown. 
*/\n    int64_t group_id;                     /**< Group ID of owner, -1 if unknown. */\n    int64_t filemode;                     /**< Unix file mode, -1 if unknown. */\n} AVIODirEntry;\n\ntypedef struct AVIODirContext {\n    struct URLContext *url_context;\n} AVIODirContext;\n\n/**\n * Different data types that can be returned via the AVIO\n * write_data_type callback.\n */\nenum AVIODataMarkerType {\n    /**\n     * Header data; this needs to be present for the stream to be decodeable.\n     */\n    AVIO_DATA_MARKER_HEADER,\n    /**\n     * A point in the output bytestream where a decoder can start decoding\n     * (i.e. a keyframe). A demuxer/decoder given the data flagged with\n     * AVIO_DATA_MARKER_HEADER, followed by any AVIO_DATA_MARKER_SYNC_POINT,\n     * should give decodeable results.\n     */\n    AVIO_DATA_MARKER_SYNC_POINT,\n    /**\n     * A point in the output bytestream where a demuxer can start parsing\n     * (for non self synchronizing bytestream formats). That is, any\n     * non-keyframe packet start point.\n     */\n    AVIO_DATA_MARKER_BOUNDARY_POINT,\n    /**\n     * This is any, unlabelled data. It can either be a muxer not marking\n     * any positions at all, it can be an actual boundary/sync point\n     * that the muxer chooses not to mark, or a later part of a packet/fragment\n     * that is cut into multiple write callbacks due to limited IO buffer size.\n     */\n    AVIO_DATA_MARKER_UNKNOWN,\n    /**\n     * Trailer data, which doesn't contain actual content, but only for\n     * finalizing the output file.\n     */\n    AVIO_DATA_MARKER_TRAILER,\n    /**\n     * A point in the output bytestream where the underlying AVIOContext might\n     * flush the buffer depending on latency or buffering requirements. 
Typically\n     * means the end of a packet.\n     */\n    AVIO_DATA_MARKER_FLUSH_POINT,\n};\n\n/**\n * Bytestream IO Context.\n * New fields can be added to the end with minor version bumps.\n * Removal, reordering and changes to existing fields require a major\n * version bump.\n * sizeof(AVIOContext) must not be used outside libav*.\n *\n * @note None of the function pointers in AVIOContext should be called\n *       directly, they should only be set by the client application\n *       when implementing custom I/O. Normally these are set to the\n *       function pointers specified in avio_alloc_context()\n */\ntypedef struct AVIOContext {\n    /**\n     * A class for private options.\n     *\n     * If this AVIOContext is created by avio_open2(), av_class is set and\n     * passes the options down to protocols.\n     *\n     * If this AVIOContext is manually allocated, then av_class may be set by\n     * the caller.\n     *\n     * warning -- this field can be NULL, be sure to not pass this AVIOContext\n     * to any av_opt_* functions in that case.\n     */\n    const AVClass *av_class;\n\n    /*\n     * The following shows the relationship between buffer, buf_ptr,\n     * buf_ptr_max, buf_end, buf_size, and pos, when reading and when writing\n     * (since AVIOContext is used for both):\n     *\n     **********************************************************************************\n     *                                   READING\n     **********************************************************************************\n     *\n     *                            |              buffer_size              |\n     *                            |---------------------------------------|\n     *                            |                                       |\n     *\n     *                         buffer          buf_ptr       buf_end\n     *                            +---------------+-----------------------+\n     *                            |/ / / / / / / /|/ / 
/ / / / /|         |\n     *  read buffer:              |/ / consumed / | to be read /|         |\n     *                            |/ / / / / / / /|/ / / / / / /|         |\n     *                            +---------------+-----------------------+\n     *\n     *                                                         pos\n     *              +-------------------------------------------+-----------------+\n     *  input file: |                                           |                 |\n     *              +-------------------------------------------+-----------------+\n     *\n     *\n     **********************************************************************************\n     *                                   WRITING\n     **********************************************************************************\n     *\n     *                             |          buffer_size                 |\n     *                             |--------------------------------------|\n     *                             |                                      |\n     *\n     *                                                buf_ptr_max\n     *                          buffer                 (buf_ptr)       buf_end\n     *                             +-----------------------+--------------+\n     *                             |/ / / / / / / / / / / /|              |\n     *  write buffer:              | / / to be flushed / / |              |\n     *                             |/ / / / / / / / / / / /|              |\n     *                             +-----------------------+--------------+\n     *                               buf_ptr can be in this\n     *                               due to a backward seek\n     *\n     *                            pos\n     *               +-------------+----------------------------------------------+\n     *  output file: |             |                                              |\n     *               
+-------------+----------------------------------------------+\n     *\n     */\n    unsigned char *buffer;  /**< Start of the buffer. */\n    int buffer_size;        /**< Maximum buffer size */\n    unsigned char *buf_ptr; /**< Current position in the buffer */\n    unsigned char *buf_end; /**< End of the data, may be less than\n                                 buffer+buffer_size if the read function returned\n                                 less data than requested, e.g. for streams where\n                                 no more data has been received yet. */\n    void *opaque;           /**< A private pointer, passed to the read/write/seek/...\n                                 functions. */\n    int (*read_packet)(void *opaque, uint8_t *buf, int buf_size);\n    int (*write_packet)(void *opaque, uint8_t *buf, int buf_size);\n    int64_t (*seek)(void *opaque, int64_t offset, int whence);\n    int64_t pos;            /**< position in the file of the current buffer */\n    int eof_reached;        /**< true if eof reached */\n    int write_flag;         /**< true if open for writing */\n    int max_packet_size;\n    unsigned long checksum;\n    unsigned char *checksum_ptr;\n    unsigned long (*update_checksum)(unsigned long checksum, const uint8_t *buf, unsigned int size);\n    int error;              /**< contains the error code or 0 if no error happened */\n    /**\n     * Pause or resume playback for network streaming protocols - e.g. 
MMS.\n     */\n    int (*read_pause)(void *opaque, int pause);\n    /**\n     * Seek to a given timestamp in stream with the specified stream_index.\n     * Needed for some network streaming protocols which don't support seeking\n     * to byte position.\n     */\n    int64_t (*read_seek)(void *opaque, int stream_index,\n                         int64_t timestamp, int flags);\n    /**\n     * A combination of AVIO_SEEKABLE_ flags or 0 when the stream is not seekable.\n     */\n    int seekable;\n\n    /**\n     * max filesize, used to limit allocations\n     * This field is internal to libavformat and access from outside is not allowed.\n     */\n    int64_t maxsize;\n\n    /**\n     * avio_read and avio_write should if possible be satisfied directly\n     * instead of going through a buffer, and avio_seek will always\n     * call the underlying seek function directly.\n     */\n    int direct;\n\n    /**\n     * Bytes read statistic\n     * This field is internal to libavformat and access from outside is not allowed.\n     */\n    int64_t bytes_read;\n\n    /**\n     * seek statistic\n     * This field is internal to libavformat and access from outside is not allowed.\n     */\n    int seek_count;\n\n    /**\n     * writeout statistic\n     * This field is internal to libavformat and access from outside is not allowed.\n     */\n    int writeout_count;\n\n    /**\n     * Original buffer size\n     * used internally after probing and ensure seekback to reset the buffer size\n     * This field is internal to libavformat and access from outside is not allowed.\n     */\n    int orig_buffer_size;\n\n    /**\n     * Threshold to favor readahead over seek.\n     * This is current internal only, do not use from outside.\n     */\n    int short_seek_threshold;\n\n    /**\n     * ',' separated list of allowed protocols.\n     */\n    const char *protocol_whitelist;\n\n    /**\n     * ',' separated list of disallowed protocols.\n     */\n    const char 
*protocol_blacklist;\n\n    /**\n     * A callback that is used instead of write_packet.\n     */\n    int (*write_data_type)(void *opaque, uint8_t *buf, int buf_size,\n                           enum AVIODataMarkerType type, int64_t time);\n    /**\n     * If set, don't call write_data_type separately for AVIO_DATA_MARKER_BOUNDARY_POINT,\n     * but ignore them and treat them as AVIO_DATA_MARKER_UNKNOWN (to avoid needlessly\n     * small chunks of data returned from the callback).\n     */\n    int ignore_boundary_point;\n\n    /**\n     * Internal, not meant to be used from outside of AVIOContext.\n     */\n    enum AVIODataMarkerType current_type;\n    int64_t last_time;\n\n    /**\n     * A callback that is used instead of short_seek_threshold.\n     * This is current internal only, do not use from outside.\n     */\n    int (*short_seek_get)(void *opaque);\n\n    int64_t written;\n\n    /**\n     * Maximum reached position before a backward seek in the write buffer,\n     * used keeping track of already written data for a later flush.\n     */\n    unsigned char *buf_ptr_max;\n\n    /**\n     * Try to buffer at least this amount of data before flushing it\n     */\n    int min_packet_size;\n} AVIOContext;\n\n/**\n * Return the name of the protocol that will handle the passed URL.\n *\n * NULL is returned if no protocol could be found for the given URL.\n *\n * @return Name of the protocol or NULL.\n */\nconst char *avio_find_protocol_name(const char *url);\n\n/**\n * Return AVIO_FLAG_* access flags corresponding to the access permissions\n * of the resource in url, or a negative value corresponding to an\n * AVERROR code in case of failure. The returned access flags are\n * masked by the value in flags.\n *\n * @note This function is intrinsically unsafe, in the sense that the\n * checked resource may change its existence or permission status from\n * one call to another. 
Thus you should not trust the returned value,\n * unless you are sure that no other processes are accessing the\n * checked resource.\n */\nint avio_check(const char *url, int flags);\n\n/**\n * Move or rename a resource.\n *\n * @note url_src and url_dst should share the same protocol and authority.\n *\n * @param url_src url to resource to be moved\n * @param url_dst new url to resource if the operation succeeded\n * @return >=0 on success or negative on error.\n */\nint avpriv_io_move(const char *url_src, const char *url_dst);\n\n/**\n * Delete a resource.\n *\n * @param url resource to be deleted.\n * @return >=0 on success or negative on error.\n */\nint avpriv_io_delete(const char *url);\n\n/**\n * Open directory for reading.\n *\n * @param s       directory read context. Pointer to a NULL pointer must be passed.\n * @param url     directory to be listed.\n * @param options A dictionary filled with protocol-private options. On return\n *                this parameter will be destroyed and replaced with a dictionary\n *                containing options that were not found. May be NULL.\n * @return >=0 on success or negative on error.\n */\nint avio_open_dir(AVIODirContext **s, const char *url, AVDictionary **options);\n\n/**\n * Get next directory entry.\n *\n * Returned entry must be freed with avio_free_directory_entry(). In particular\n * it may outlive AVIODirContext.\n *\n * @param s         directory read context.\n * @param[out] next next entry or NULL when no more entries.\n * @return >=0 on success or negative on error. 
End of list is not considered an\n *             error.\n */\nint avio_read_dir(AVIODirContext *s, AVIODirEntry **next);\n\n/**\n * Close directory.\n *\n * @note Entries created using avio_read_dir() are not deleted and must be\n * freeded with avio_free_directory_entry().\n *\n * @param s         directory read context.\n * @return >=0 on success or negative on error.\n */\nint avio_close_dir(AVIODirContext **s);\n\n/**\n * Free entry allocated by avio_read_dir().\n *\n * @param entry entry to be freed.\n */\nvoid avio_free_directory_entry(AVIODirEntry **entry);\n\n/**\n * Allocate and initialize an AVIOContext for buffered I/O. It must be later\n * freed with avio_context_free().\n *\n * @param buffer Memory block for input/output operations via AVIOContext.\n *        The buffer must be allocated with av_malloc() and friends.\n *        It may be freed and replaced with a new buffer by libavformat.\n *        AVIOContext.buffer holds the buffer currently in use,\n *        which must be later freed with av_free().\n * @param buffer_size The buffer size is very important for performance.\n *        For protocols with fixed blocksize it should be set to this blocksize.\n *        For others a typical size is a cache page, e.g. 
4kb.\n * @param write_flag Set to 1 if the buffer should be writable, 0 otherwise.\n * @param opaque An opaque pointer to user-specific data.\n * @param read_packet  A function for refilling the buffer, may be NULL.\n *                     For stream protocols, must never return 0 but rather\n *                     a proper AVERROR code.\n * @param write_packet A function for writing the buffer contents, may be NULL.\n *        The function may not change the input buffers content.\n * @param seek A function for seeking to specified byte position, may be NULL.\n *\n * @return Allocated AVIOContext or NULL on failure.\n */\nAVIOContext *avio_alloc_context(\n                  unsigned char *buffer,\n                  int buffer_size,\n                  int write_flag,\n                  void *opaque,\n                  int (*read_packet)(void *opaque, uint8_t *buf, int buf_size),\n                  int (*write_packet)(void *opaque, uint8_t *buf, int buf_size),\n                  int64_t (*seek)(void *opaque, int64_t offset, int whence));\n\n/**\n * Free the supplied IO context and everything associated with it.\n *\n * @param s Double pointer to the IO context. 
This function will write NULL\n * into s.\n */\nvoid avio_context_free(AVIOContext **s);\n\nvoid avio_w8(AVIOContext *s, int b);\nvoid avio_write(AVIOContext *s, const unsigned char *buf, int size);\nvoid avio_wl64(AVIOContext *s, uint64_t val);\nvoid avio_wb64(AVIOContext *s, uint64_t val);\nvoid avio_wl32(AVIOContext *s, unsigned int val);\nvoid avio_wb32(AVIOContext *s, unsigned int val);\nvoid avio_wl24(AVIOContext *s, unsigned int val);\nvoid avio_wb24(AVIOContext *s, unsigned int val);\nvoid avio_wl16(AVIOContext *s, unsigned int val);\nvoid avio_wb16(AVIOContext *s, unsigned int val);\n\n/**\n * Write a NULL-terminated string.\n * @return number of bytes written.\n */\nint avio_put_str(AVIOContext *s, const char *str);\n\n/**\n * Convert an UTF-8 string to UTF-16LE and write it.\n * @param s the AVIOContext\n * @param str NULL-terminated UTF-8 string\n *\n * @return number of bytes written.\n */\nint avio_put_str16le(AVIOContext *s, const char *str);\n\n/**\n * Convert an UTF-8 string to UTF-16BE and write it.\n * @param s the AVIOContext\n * @param str NULL-terminated UTF-8 string\n *\n * @return number of bytes written.\n */\nint avio_put_str16be(AVIOContext *s, const char *str);\n\n/**\n * Mark the written bytestream as a specific type.\n *\n * Zero-length ranges are omitted from the output.\n *\n * @param time the stream time the current bytestream pos corresponds to\n *             (in AV_TIME_BASE units), or AV_NOPTS_VALUE if unknown or not\n *             applicable\n * @param type the kind of data written starting at the current pos\n */\nvoid avio_write_marker(AVIOContext *s, int64_t time, enum AVIODataMarkerType type);\n\n/**\n * ORing this as the \"whence\" parameter to a seek function causes it to\n * return the filesize without seeking anywhere. 
Supporting this is optional.\n * If it is not supported then the seek function will return <0.\n */\n#define AVSEEK_SIZE 0x10000\n\n/**\n * Passing this flag as the \"whence\" parameter to a seek function causes it to\n * seek by any means (like reopening and linear reading) or other normally unreasonable\n * means that can be extremely slow.\n * This may be ignored by the seek code.\n */\n#define AVSEEK_FORCE 0x20000\n\n/**\n * fseek() equivalent for AVIOContext.\n * @return new position or AVERROR.\n */\nint64_t avio_seek(AVIOContext *s, int64_t offset, int whence);\n\n/**\n * Skip given number of bytes forward\n * @return new position or AVERROR.\n */\nint64_t avio_skip(AVIOContext *s, int64_t offset);\n\n/**\n * ftell() equivalent for AVIOContext.\n * @return position or AVERROR.\n */\nstatic av_always_inline int64_t avio_tell(AVIOContext *s)\n{\n    return avio_seek(s, 0, SEEK_CUR);\n}\n\n/**\n * Get the filesize.\n * @return filesize or AVERROR\n */\nint64_t avio_size(AVIOContext *s);\n\n/**\n * feof() equivalent for AVIOContext.\n * @return non zero if and only if end of file\n */\nint avio_feof(AVIOContext *s);\n\n/** @warning Writes up to 4 KiB per call */\nint avio_printf(AVIOContext *s, const char *fmt, ...) av_printf_format(2, 3);\n\n/**\n * Force flushing of buffered data.\n *\n * For write streams, force the buffered data to be immediately written to the output,\n * without to wait to fill the internal buffer.\n *\n * For read streams, discard all currently buffered data, and advance the\n * reported file position to that of the underlying stream. This does not\n * read new data, and does not perform any seeks.\n */\nvoid avio_flush(AVIOContext *s);\n\n/**\n * Read size bytes from AVIOContext into buf.\n * @return number of bytes read or AVERROR\n */\nint avio_read(AVIOContext *s, unsigned char *buf, int size);\n\n/**\n * Read size bytes from AVIOContext into buf. Unlike avio_read(), this is allowed\n * to read fewer bytes than requested. 
The missing bytes can be read in the next\n * call. This always tries to read at least 1 byte.\n * Useful to reduce latency in certain cases.\n * @return number of bytes read or AVERROR\n */\nint avio_read_partial(AVIOContext *s, unsigned char *buf, int size);\n\n/**\n * @name Functions for reading from AVIOContext\n * @{\n *\n * @note return 0 if EOF, so you cannot use it if EOF handling is\n *       necessary\n */\nint          avio_r8  (AVIOContext *s);\nunsigned int avio_rl16(AVIOContext *s);\nunsigned int avio_rl24(AVIOContext *s);\nunsigned int avio_rl32(AVIOContext *s);\nuint64_t     avio_rl64(AVIOContext *s);\nunsigned int avio_rb16(AVIOContext *s);\nunsigned int avio_rb24(AVIOContext *s);\nunsigned int avio_rb32(AVIOContext *s);\nuint64_t     avio_rb64(AVIOContext *s);\n/**\n * @}\n */\n\n/**\n * Read a string from pb into buf. The reading will terminate when either\n * a NULL character was encountered, maxlen bytes have been read, or nothing\n * more can be read from pb. The result is guaranteed to be NULL-terminated, it\n * will be truncated if buf is too small.\n * Note that the string is not interpreted or validated in any way, it\n * might get truncated in the middle of a sequence for multi-byte encodings.\n *\n * @return number of bytes read (is always <= maxlen).\n * If reading ends on EOF or error, the return value will be one more than\n * bytes actually read.\n */\nint avio_get_str(AVIOContext *pb, int maxlen, char *buf, int buflen);\n\n/**\n * Read a UTF-16 string from pb and convert it to UTF-8.\n * The reading will terminate when either a null or invalid character was\n * encountered or maxlen bytes have been read.\n * @return number of bytes read (is always <= maxlen)\n */\nint avio_get_str16le(AVIOContext *pb, int maxlen, char *buf, int buflen);\nint avio_get_str16be(AVIOContext *pb, int maxlen, char *buf, int buflen);\n\n\n/**\n * @name URL open modes\n * The flags argument to avio_open must be one of the following\n * constants, optionally 
ORed with other flags.\n * @{\n */\n#define AVIO_FLAG_READ  1                                      /**< read-only */\n#define AVIO_FLAG_WRITE 2                                      /**< write-only */\n#define AVIO_FLAG_READ_WRITE (AVIO_FLAG_READ|AVIO_FLAG_WRITE)  /**< read-write pseudo flag */\n/**\n * @}\n */\n\n/**\n * Use non-blocking mode.\n * If this flag is set, operations on the context will return\n * AVERROR(EAGAIN) if they can not be performed immediately.\n * If this flag is not set, operations on the context will never return\n * AVERROR(EAGAIN).\n * Note that this flag does not affect the opening/connecting of the\n * context. Connecting a protocol will always block if necessary (e.g. on\n * network protocols) but never hang (e.g. on busy devices).\n * Warning: non-blocking protocols is work-in-progress; this flag may be\n * silently ignored.\n */\n#define AVIO_FLAG_NONBLOCK 8\n\n/**\n * Use direct mode.\n * avio_read and avio_write should if possible be satisfied directly\n * instead of going through a buffer, and avio_seek will always\n * call the underlying seek function directly.\n */\n#define AVIO_FLAG_DIRECT 0x8000\n\n/**\n * Create and initialize a AVIOContext for accessing the\n * resource indicated by url.\n * @note When the resource indicated by url has been opened in\n * read+write mode, the AVIOContext can be used only for writing.\n *\n * @param s Used to return the pointer to the created AVIOContext.\n * In case of failure the pointed to value is set to NULL.\n * @param url resource to access\n * @param flags flags which control how the resource indicated by url\n * is to be opened\n * @return >= 0 in case of success, a negative value corresponding to an\n * AVERROR code in case of failure\n */\nint avio_open(AVIOContext **s, const char *url, int flags);\n\n/**\n * Create and initialize a AVIOContext for accessing the\n * resource indicated by url.\n * @note When the resource indicated by url has been opened in\n * read+write mode, the 
AVIOContext can be used only for writing.\n *\n * @param s Used to return the pointer to the created AVIOContext.\n * In case of failure the pointed to value is set to NULL.\n * @param url resource to access\n * @param flags flags which control how the resource indicated by url\n * is to be opened\n * @param int_cb an interrupt callback to be used at the protocols level\n * @param options  A dictionary filled with protocol-private options. On return\n * this parameter will be destroyed and replaced with a dict containing options\n * that were not found. May be NULL.\n * @return >= 0 in case of success, a negative value corresponding to an\n * AVERROR code in case of failure\n */\nint avio_open2(AVIOContext **s, const char *url, int flags,\n               const AVIOInterruptCB *int_cb, AVDictionary **options);\n\n/**\n * Close the resource accessed by the AVIOContext s and free it.\n * This function can only be used if s was opened by avio_open().\n *\n * The internal buffer is automatically flushed before closing the\n * resource.\n *\n * @return 0 on success, an AVERROR < 0 on error.\n * @see avio_closep\n */\nint avio_close(AVIOContext *s);\n\n/**\n * Close the resource accessed by the AVIOContext *s, free it\n * and set the pointer pointing to it to NULL.\n * This function can only be used if s was opened by avio_open().\n *\n * The internal buffer is automatically flushed before closing the\n * resource.\n *\n * @return 0 on success, an AVERROR < 0 on error.\n * @see avio_close\n */\nint avio_closep(AVIOContext **s);\n\n\n/**\n * Open a write only memory stream.\n *\n * @param s new IO context\n * @return zero if no error.\n */\nint avio_open_dyn_buf(AVIOContext **s);\n\n/**\n * Return the written size and a pointer to the buffer.\n * The AVIOContext stream is left intact.\n * The buffer must NOT be freed.\n * No padding is added to the buffer.\n *\n * @param s IO context\n * @param pbuffer pointer to a byte buffer\n * @return the length of the byte buffer\n 
*/\nint avio_get_dyn_buf(AVIOContext *s, uint8_t **pbuffer);\n\n/**\n * Return the written size and a pointer to the buffer. The buffer\n * must be freed with av_free().\n * Padding of AV_INPUT_BUFFER_PADDING_SIZE is added to the buffer.\n *\n * @param s IO context\n * @param pbuffer pointer to a byte buffer\n * @return the length of the byte buffer\n */\nint avio_close_dyn_buf(AVIOContext *s, uint8_t **pbuffer);\n\n/**\n * Iterate through names of available protocols.\n *\n * @param opaque A private pointer representing current protocol.\n *        It must be a pointer to NULL on first iteration and will\n *        be updated by successive calls to avio_enum_protocols.\n * @param output If set to 1, iterate over output protocols,\n *               otherwise over input protocols.\n *\n * @return A static string containing the name of current protocol or NULL\n */\nconst char *avio_enum_protocols(void **opaque, int output);\n\n/**\n * Pause and resume playing - only meaningful if using a network streaming\n * protocol (e.g. MMS).\n *\n * @param h     IO context from which to call the read_pause function pointer\n * @param pause 1 for pause, 0 for resume\n */\nint     avio_pause(AVIOContext *h, int pause);\n\n/**\n * Seek to a given timestamp relative to some component stream.\n * Only meaningful if using a network streaming protocol (e.g. 
MMS.).\n *\n * @param h IO context from which to call the seek function pointers\n * @param stream_index The stream index that the timestamp is relative to.\n *        If stream_index is (-1) the timestamp should be in AV_TIME_BASE\n *        units from the beginning of the presentation.\n *        If a stream_index >= 0 is used and the protocol does not support\n *        seeking based on component streams, the call will fail.\n * @param timestamp timestamp in AVStream.time_base units\n *        or if there is no stream specified then in AV_TIME_BASE units.\n * @param flags Optional combination of AVSEEK_FLAG_BACKWARD, AVSEEK_FLAG_BYTE\n *        and AVSEEK_FLAG_ANY. The protocol may silently ignore\n *        AVSEEK_FLAG_BACKWARD and AVSEEK_FLAG_ANY, but AVSEEK_FLAG_BYTE will\n *        fail if used and not supported.\n * @return >= 0 on success\n * @see AVInputFormat::read_seek\n */\nint64_t avio_seek_time(AVIOContext *h, int stream_index,\n                       int64_t timestamp, int flags);\n\n/* Avoid a warning. The header can not be included because it breaks c++. 
*/\nstruct AVBPrint;\n\n/**\n * Read contents of h into print buffer, up to max_size bytes, or up to EOF.\n *\n * @return 0 for success (max_size bytes read or EOF reached), negative error\n * code otherwise\n */\nint avio_read_to_bprint(AVIOContext *h, struct AVBPrint *pb, size_t max_size);\n\n/**\n * Accept and allocate a client context on a server context.\n * @param  s the server context\n * @param  c the client context, must be unallocated\n * @return   >= 0 on success or a negative value corresponding\n *           to an AVERROR on failure\n */\nint avio_accept(AVIOContext *s, AVIOContext **c);\n\n/**\n * Perform one step of the protocol handshake to accept a new client.\n * This function must be called on a client returned by avio_accept() before\n * using it as a read/write context.\n * It is separate from avio_accept() because it may block.\n * A step of the handshake is defined by places where the application may\n * decide to change the proceedings.\n * For example, on a protocol with a request header and a reply header, each\n * one can constitute a step because the application may use the parameters\n * from the request to change parameters in the reply; or each individual\n * chunk of the request can constitute a step.\n * If the handshake is already finished, avio_handshake() does nothing and\n * returns 0 immediately.\n *\n * @param  c the client context to perform the handshake on\n * @return   0   on a complete and successful handshake\n *           > 0 if the handshake progressed, but is not complete\n *           < 0 for an AVERROR code\n */\nint avio_handshake(AVIOContext *c);\n#endif /* AVFORMAT_AVIO_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavformat/version.h",
    "content": "/*\n * Version macros.\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVFORMAT_VERSION_H\n#define AVFORMAT_VERSION_H\n\n/**\n * @file\n * @ingroup libavf\n * Libavformat version macros\n */\n\n#include \"libavutil/version.h\"\n\n// Major bumping may affect Ticket5467, 5421, 5451(compatibility with Chromium)\n// Also please add any ticket numbers that you believe might be affected here\n#define LIBAVFORMAT_VERSION_MAJOR  58\n#define LIBAVFORMAT_VERSION_MINOR  12\n#define LIBAVFORMAT_VERSION_MICRO 100\n\n#define LIBAVFORMAT_VERSION_INT AV_VERSION_INT(LIBAVFORMAT_VERSION_MAJOR, \\\n                                               LIBAVFORMAT_VERSION_MINOR, \\\n                                               LIBAVFORMAT_VERSION_MICRO)\n#define LIBAVFORMAT_VERSION     AV_VERSION(LIBAVFORMAT_VERSION_MAJOR,   \\\n                                           LIBAVFORMAT_VERSION_MINOR,   \\\n                                           LIBAVFORMAT_VERSION_MICRO)\n#define LIBAVFORMAT_BUILD       LIBAVFORMAT_VERSION_INT\n\n#define LIBAVFORMAT_IDENT       \"Lavf\" AV_STRINGIFY(LIBAVFORMAT_VERSION)\n\n/**\n * FF_API_* defines may be placed below to indicate public API that will be\n * dropped at a future version bump. 
The defines themselves are not part of\n * the public API and may change, break or disappear at any time.\n *\n * @note, when bumping the major version it is recommended to manually\n * disable each FF_API_* in its own commit instead of disabling them all\n * at once through the bump. This improves the git bisect-ability of the change.\n *\n */\n#ifndef FF_API_COMPUTE_PKT_FIELDS2\n#define FF_API_COMPUTE_PKT_FIELDS2      (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_OLD_OPEN_CALLBACKS\n#define FF_API_OLD_OPEN_CALLBACKS       (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_LAVF_AVCTX\n#define FF_API_LAVF_AVCTX               (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_HTTP_USER_AGENT\n#define FF_API_HTTP_USER_AGENT          (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_HLS_WRAP\n#define FF_API_HLS_WRAP                 (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_LAVF_KEEPSIDE_FLAG\n#define FF_API_LAVF_KEEPSIDE_FLAG       (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_OLD_ROTATE_API\n#define FF_API_OLD_ROTATE_API           (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_FORMAT_GET_SET\n#define FF_API_FORMAT_GET_SET           (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_OLD_AVIO_EOF_0\n#define FF_API_OLD_AVIO_EOF_0           (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_LAVF_FFSERVER\n#define FF_API_LAVF_FFSERVER            (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_FORMAT_FILENAME\n#define FF_API_FORMAT_FILENAME          (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_OLD_RTSP_OPTIONS\n#define FF_API_OLD_RTSP_OPTIONS         (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n#ifndef FF_API_NEXT\n#define FF_API_NEXT                     (LIBAVFORMAT_VERSION_MAJOR < 59)\n#endif\n\n\n#ifndef FF_API_R_FRAME_RATE\n#define FF_API_R_FRAME_RATE            1\n#endif\n#endif /* AVFORMAT_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/adler32.h",
    "content": "/*\n * copyright (c) 2006 Mans Rullgard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_adler32\n * Public header for Adler-32 hash function implementation.\n */\n\n#ifndef AVUTIL_ADLER32_H\n#define AVUTIL_ADLER32_H\n\n#include <stdint.h>\n#include \"attributes.h\"\n\n/**\n * @defgroup lavu_adler32 Adler-32\n * @ingroup lavu_hash\n * Adler-32 hash function implementation.\n *\n * @{\n */\n\n/**\n * Calculate the Adler32 checksum of a buffer.\n *\n * Passing the return value to a subsequent av_adler32_update() call\n * allows the checksum of multiple buffers to be calculated as though\n * they were concatenated.\n *\n * @param adler initial checksum value\n * @param buf   pointer to input buffer\n * @param len   size of input buffer\n * @return      updated checksum\n */\nunsigned long av_adler32_update(unsigned long adler, const uint8_t *buf,\n                                unsigned int len) av_pure;\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_ADLER32_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/aes.h",
    "content": "/*\n * copyright (c) 2007 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_AES_H\n#define AVUTIL_AES_H\n\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @defgroup lavu_aes AES\n * @ingroup lavu_crypto\n * @{\n */\n\nextern const int av_aes_size;\n\nstruct AVAES;\n\n/**\n * Allocate an AVAES context.\n */\nstruct AVAES *av_aes_alloc(void);\n\n/**\n * Initialize an AVAES context.\n * @param key_bits 128, 192 or 256\n * @param decrypt 0 for encryption, 1 for decryption\n */\nint av_aes_init(struct AVAES *a, const uint8_t *key, int key_bits, int decrypt);\n\n/**\n * Encrypt or decrypt a buffer using a previously initialized context.\n * @param count number of 16 byte blocks\n * @param dst destination array, can be equal to src\n * @param src source array, can be equal to dst\n * @param iv initialization vector for CBC mode, if NULL then ECB will be used\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_aes_crypt(struct AVAES *a, uint8_t *dst, const uint8_t *src, int count, uint8_t *iv, int decrypt);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_AES_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/aes_ctr.h",
    "content": "/*\n * AES-CTR cipher\n * Copyright (c) 2015 Eran Kornblau <erankor at gmail dot com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_AES_CTR_H\n#define AVUTIL_AES_CTR_H\n\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n#define AES_CTR_KEY_SIZE (16)\n#define AES_CTR_IV_SIZE (8)\n\nstruct AVAESCTR;\n\n/**\n * Allocate an AVAESCTR context.\n */\nstruct AVAESCTR *av_aes_ctr_alloc(void);\n\n/**\n * Initialize an AVAESCTR context.\n * @param key encryption key, must have a length of AES_CTR_KEY_SIZE\n */\nint av_aes_ctr_init(struct AVAESCTR *a, const uint8_t *key);\n\n/**\n * Release an AVAESCTR context.\n */\nvoid av_aes_ctr_free(struct AVAESCTR *a);\n\n/**\n * Process a buffer using a previously initialized context.\n * @param dst destination array, can be equal to src\n * @param src source array, can be equal to dst\n * @param size the size of src and dst\n */\nvoid av_aes_ctr_crypt(struct AVAESCTR *a, uint8_t *dst, const uint8_t *src, int size);\n\n/**\n * Get the current iv\n */\nconst uint8_t* av_aes_ctr_get_iv(struct AVAESCTR *a);\n\n/**\n * Generate a random iv\n */\nvoid av_aes_ctr_set_random_iv(struct AVAESCTR *a);\n\n/**\n * Forcefully change the 8-byte iv\n */\nvoid 
av_aes_ctr_set_iv(struct AVAESCTR *a, const uint8_t* iv);\n\n/**\n * Forcefully change the \"full\" 16-byte iv, including the counter\n */\nvoid av_aes_ctr_set_full_iv(struct AVAESCTR *a, const uint8_t* iv);\n\n/**\n * Increment the top 64 bit of the iv (performed after each frame)\n */\nvoid av_aes_ctr_increment_iv(struct AVAESCTR *a);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_AES_CTR_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/attributes.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Macro definitions for various function/variable attributes\n */\n\n#ifndef AVUTIL_ATTRIBUTES_H\n#define AVUTIL_ATTRIBUTES_H\n\n#ifdef __GNUC__\n#    define AV_GCC_VERSION_AT_LEAST(x,y) (__GNUC__ > (x) || __GNUC__ == (x) && __GNUC_MINOR__ >= (y))\n#    define AV_GCC_VERSION_AT_MOST(x,y)  (__GNUC__ < (x) || __GNUC__ == (x) && __GNUC_MINOR__ <= (y))\n#else\n#    define AV_GCC_VERSION_AT_LEAST(x,y) 0\n#    define AV_GCC_VERSION_AT_MOST(x,y)  0\n#endif\n\n#ifndef av_always_inline\n#if AV_GCC_VERSION_AT_LEAST(3,1)\n#    define av_always_inline __attribute__((always_inline)) inline\n#elif defined(_MSC_VER)\n#    define av_always_inline __forceinline\n#else\n#    define av_always_inline inline\n#endif\n#endif\n\n#ifndef av_extern_inline\n#if defined(__ICL) && __ICL >= 1210 || defined(__GNUC_STDC_INLINE__)\n#    define av_extern_inline extern inline\n#else\n#    define av_extern_inline inline\n#endif\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(3,4)\n#    define av_warn_unused_result __attribute__((warn_unused_result))\n#else\n#    define av_warn_unused_result\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(3,1)\n#    
define av_noinline __attribute__((noinline))\n#elif defined(_MSC_VER)\n#    define av_noinline __declspec(noinline)\n#else\n#    define av_noinline\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(3,1) || defined(__clang__)\n#    define av_pure __attribute__((pure))\n#else\n#    define av_pure\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(2,6) || defined(__clang__)\n#    define av_const __attribute__((const))\n#else\n#    define av_const\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(4,3) || defined(__clang__)\n#    define av_cold __attribute__((cold))\n#else\n#    define av_cold\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(4,1) && !defined(__llvm__)\n#    define av_flatten __attribute__((flatten))\n#else\n#    define av_flatten\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(3,1)\n#    define attribute_deprecated __attribute__((deprecated))\n#elif defined(_MSC_VER)\n#    define attribute_deprecated __declspec(deprecated)\n#else\n#    define attribute_deprecated\n#endif\n\n/**\n * Disable warnings about deprecated features\n * This is useful for sections of code kept for backward compatibility and\n * scheduled for removal.\n */\n#ifndef AV_NOWARN_DEPRECATED\n#if AV_GCC_VERSION_AT_LEAST(4,6)\n#    define AV_NOWARN_DEPRECATED(code) \\\n        _Pragma(\"GCC diagnostic push\") \\\n        _Pragma(\"GCC diagnostic ignored \\\"-Wdeprecated-declarations\\\"\") \\\n        code \\\n        _Pragma(\"GCC diagnostic pop\")\n#elif defined(_MSC_VER)\n#    define AV_NOWARN_DEPRECATED(code) \\\n        __pragma(warning(push)) \\\n        __pragma(warning(disable : 4996)) \\\n        code; \\\n        __pragma(warning(pop))\n#else\n#    define AV_NOWARN_DEPRECATED(code) code\n#endif\n#endif\n\n#if defined(__GNUC__) || defined(__clang__)\n#    define av_unused __attribute__((unused))\n#else\n#    define av_unused\n#endif\n\n/**\n * Mark a variable as used and prevent the compiler from optimizing it\n * away.  
This is useful for variables accessed only from inline\n * assembler without the compiler being aware.\n */\n#if AV_GCC_VERSION_AT_LEAST(3,1) || defined(__clang__)\n#    define av_used __attribute__((used))\n#else\n#    define av_used\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(3,3) || defined(__clang__)\n#   define av_alias __attribute__((may_alias))\n#else\n#   define av_alias\n#endif\n\n#if (defined(__GNUC__) || defined(__clang__)) && !defined(__INTEL_COMPILER)\n#    define av_uninit(x) x=x\n#else\n#    define av_uninit(x) x\n#endif\n\n#if defined(__GNUC__) || defined(__clang__)\n#    define av_builtin_constant_p __builtin_constant_p\n#    define av_printf_format(fmtpos, attrpos) __attribute__((__format__(__printf__, fmtpos, attrpos)))\n#else\n#    define av_builtin_constant_p(x) 0\n#    define av_printf_format(fmtpos, attrpos)\n#endif\n\n#if AV_GCC_VERSION_AT_LEAST(2,5) || defined(__clang__)\n#    define av_noreturn __attribute__((noreturn))\n#else\n#    define av_noreturn\n#endif\n\n#endif /* AVUTIL_ATTRIBUTES_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/audio_fifo.h",
    "content": "/*\n * Audio FIFO\n * Copyright (c) 2012 Justin Ruggles <justin.ruggles@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Audio FIFO Buffer\n */\n\n#ifndef AVUTIL_AUDIO_FIFO_H\n#define AVUTIL_AUDIO_FIFO_H\n\n#include \"avutil.h\"\n#include \"fifo.h\"\n#include \"samplefmt.h\"\n\n/**\n * @addtogroup lavu_audio\n * @{\n *\n * @defgroup lavu_audiofifo Audio FIFO Buffer\n * @{\n */\n\n/**\n * Context for an Audio FIFO Buffer.\n *\n * - Operates at the sample level rather than the byte level.\n * - Supports multiple channels with either planar or packed sample format.\n * - Automatic reallocation when writing to a full buffer.\n */\ntypedef struct AVAudioFifo AVAudioFifo;\n\n/**\n * Free an AVAudioFifo.\n *\n * @param af  AVAudioFifo to free\n */\nvoid av_audio_fifo_free(AVAudioFifo *af);\n\n/**\n * Allocate an AVAudioFifo.\n *\n * @param sample_fmt  sample format\n * @param channels    number of channels\n * @param nb_samples  initial allocation size, in samples\n * @return            newly allocated AVAudioFifo, or NULL on error\n */\nAVAudioFifo *av_audio_fifo_alloc(enum AVSampleFormat sample_fmt, int channels,\n                                 int nb_samples);\n\n/**\n * Reallocate an AVAudioFifo.\n *\n 
* @param af          AVAudioFifo to reallocate\n * @param nb_samples  new allocation size, in samples\n * @return            0 if OK, or negative AVERROR code on failure\n */\nav_warn_unused_result\nint av_audio_fifo_realloc(AVAudioFifo *af, int nb_samples);\n\n/**\n * Write data to an AVAudioFifo.\n *\n * The AVAudioFifo will be reallocated automatically if the available space\n * is less than nb_samples.\n *\n * @see enum AVSampleFormat\n * The documentation for AVSampleFormat describes the data layout.\n *\n * @param af          AVAudioFifo to write to\n * @param data        audio data plane pointers\n * @param nb_samples  number of samples to write\n * @return            number of samples actually written, or negative AVERROR\n *                    code on failure. If successful, the number of samples\n *                    actually written will always be nb_samples.\n */\nint av_audio_fifo_write(AVAudioFifo *af, void **data, int nb_samples);\n\n/**\n * Peek data from an AVAudioFifo.\n *\n * @see enum AVSampleFormat\n * The documentation for AVSampleFormat describes the data layout.\n *\n * @param af          AVAudioFifo to read from\n * @param data        audio data plane pointers\n * @param nb_samples  number of samples to peek\n * @return            number of samples actually peeked, or negative AVERROR code\n *                    on failure. 
The number of samples actually peeked will not\n *                    be greater than nb_samples, and will only be less than\n *                    nb_samples if av_audio_fifo_size is less than nb_samples.\n */\nint av_audio_fifo_peek(AVAudioFifo *af, void **data, int nb_samples);\n\n/**\n * Peek data from an AVAudioFifo.\n *\n * @see enum AVSampleFormat\n * The documentation for AVSampleFormat describes the data layout.\n *\n * @param af          AVAudioFifo to read from\n * @param data        audio data plane pointers\n * @param nb_samples  number of samples to peek\n * @param offset      offset from current read position\n * @return            number of samples actually peeked, or negative AVERROR code\n *                    on failure. The number of samples actually peeked will not\n *                    be greater than nb_samples, and will only be less than\n *                    nb_samples if av_audio_fifo_size is less than nb_samples.\n */\nint av_audio_fifo_peek_at(AVAudioFifo *af, void **data, int nb_samples, int offset);\n\n/**\n * Read data from an AVAudioFifo.\n *\n * @see enum AVSampleFormat\n * The documentation for AVSampleFormat describes the data layout.\n *\n * @param af          AVAudioFifo to read from\n * @param data        audio data plane pointers\n * @param nb_samples  number of samples to read\n * @return            number of samples actually read, or negative AVERROR code\n *                    on failure. 
The number of samples actually read will not\n *                    be greater than nb_samples, and will only be less than\n *                    nb_samples if av_audio_fifo_size is less than nb_samples.\n */\nint av_audio_fifo_read(AVAudioFifo *af, void **data, int nb_samples);\n\n/**\n * Drain data from an AVAudioFifo.\n *\n * Removes the data without reading it.\n *\n * @param af          AVAudioFifo to drain\n * @param nb_samples  number of samples to drain\n * @return            0 if OK, or negative AVERROR code on failure\n */\nint av_audio_fifo_drain(AVAudioFifo *af, int nb_samples);\n\n/**\n * Reset the AVAudioFifo buffer.\n *\n * This empties all data in the buffer.\n *\n * @param af  AVAudioFifo to reset\n */\nvoid av_audio_fifo_reset(AVAudioFifo *af);\n\n/**\n * Get the current number of samples in the AVAudioFifo available for reading.\n *\n * @param af  the AVAudioFifo to query\n * @return    number of samples available for reading\n */\nint av_audio_fifo_size(AVAudioFifo *af);\n\n/**\n * Get the current number of samples in the AVAudioFifo available for writing.\n *\n * @param af  the AVAudioFifo to query\n * @return    number of samples available for writing\n */\nint av_audio_fifo_space(AVAudioFifo *af);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_AUDIO_FIFO_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/avassert.h",
    "content": "/*\n * copyright (c) 2010 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * simple assert() macros that are a bit more flexible than ISO C assert().\n * @author Michael Niedermayer <michaelni@gmx.at>\n */\n\n#ifndef AVUTIL_AVASSERT_H\n#define AVUTIL_AVASSERT_H\n\n#include <stdlib.h>\n#include \"avutil.h\"\n#include \"log.h\"\n\n/**\n * assert() equivalent, that is always enabled.\n */\n#define av_assert0(cond) do {                                           \\\n    if (!(cond)) {                                                      \\\n        av_log(NULL, AV_LOG_PANIC, \"Assertion %s failed at %s:%d\\n\",    \\\n               AV_STRINGIFY(cond), __FILE__, __LINE__);                 \\\n        abort();                                                        \\\n    }                                                                   \\\n} while (0)\n\n\n/**\n * assert() equivalent, that does not lie in speed critical code.\n * These asserts() thus can be enabled without fearing speed loss.\n */\n#if defined(ASSERT_LEVEL) && ASSERT_LEVEL > 0\n#define av_assert1(cond) av_assert0(cond)\n#else\n#define av_assert1(cond) ((void)0)\n#endif\n\n\n/**\n * assert() equivalent, that does 
lie in speed critical code.\n */\n#if defined(ASSERT_LEVEL) && ASSERT_LEVEL > 1\n#define av_assert2(cond) av_assert0(cond)\n#define av_assert2_fpu() av_assert0_fpu()\n#else\n#define av_assert2(cond) ((void)0)\n#define av_assert2_fpu() ((void)0)\n#endif\n\n/**\n * Assert that floating point operations can be executed.\n *\n * This will av_assert0() that the cpu is not in MMX state on X86\n */\nvoid av_assert0_fpu(void);\n\n#endif /* AVUTIL_AVASSERT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/avconfig.h",
    "content": "/* Generated by ffmpeg configure */\n#ifndef AVUTIL_AVCONFIG_H\n#define AVUTIL_AVCONFIG_H\n#define AV_HAVE_BIGENDIAN 0\n#define AV_HAVE_FAST_UNALIGNED 1\n#endif /* AVUTIL_AVCONFIG_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/avstring.h",
    "content": "/*\n * Copyright (c) 2007 Mans Rullgard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_AVSTRING_H\n#define AVUTIL_AVSTRING_H\n\n#include <stddef.h>\n#include <stdint.h>\n#include \"attributes.h\"\n\n/**\n * @addtogroup lavu_string\n * @{\n */\n\n/**\n * Return non-zero if pfx is a prefix of str. If it is, *ptr is set to\n * the address of the first character in str after the prefix.\n *\n * @param str input string\n * @param pfx prefix to test\n * @param ptr updated if the prefix is matched inside str\n * @return non-zero if the prefix matches, zero otherwise\n */\nint av_strstart(const char *str, const char *pfx, const char **ptr);\n\n/**\n * Return non-zero if pfx is a prefix of str independent of case. If\n * it is, *ptr is set to the address of the first character in str\n * after the prefix.\n *\n * @param str input string\n * @param pfx prefix to test\n * @param ptr updated if the prefix is matched inside str\n * @return non-zero if the prefix matches, zero otherwise\n */\nint av_stristart(const char *str, const char *pfx, const char **ptr);\n\n/**\n * Locate the first case-independent occurrence in the string haystack\n * of the string needle.  
A zero-length string needle is considered to\n * match at the start of haystack.\n *\n * This function is a case-insensitive version of the standard strstr().\n *\n * @param haystack string to search in\n * @param needle   string to search for\n * @return         pointer to the located match within haystack\n *                 or a null pointer if no match\n */\nchar *av_stristr(const char *haystack, const char *needle);\n\n/**\n * Locate the first occurrence of the string needle in the string haystack\n * where not more than hay_length characters are searched. A zero-length\n * string needle is considered to match at the start of haystack.\n *\n * This function is a length-limited version of the standard strstr().\n *\n * @param haystack   string to search in\n * @param needle     string to search for\n * @param hay_length length of string to search in\n * @return           pointer to the located match within haystack\n *                   or a null pointer if no match\n */\nchar *av_strnstr(const char *haystack, const char *needle, size_t hay_length);\n\n/**\n * Copy the string src to dst, but no more than size - 1 bytes, and\n * null-terminate dst.\n *\n * This function is the same as BSD strlcpy().\n *\n * @param dst destination buffer\n * @param src source string\n * @param size size of destination buffer\n * @return the length of src\n *\n * @warning since the return value is the length of src, src absolutely\n * _must_ be a properly 0-terminated string, otherwise this will read beyond\n * the end of the buffer and possibly crash.\n */\nsize_t av_strlcpy(char *dst, const char *src, size_t size);\n\n/**\n * Append the string src to the string dst, but to a total length of\n * no more than size - 1 bytes, and null-terminate dst.\n *\n * This function is similar to BSD strlcat(), but differs when\n * size <= strlen(dst).\n *\n * @param dst destination buffer\n * @param src source string\n * @param size size of destination buffer\n * @return the total length of 
src and dst\n *\n * @warning since the return value uses the length of src and dst, these\n * absolutely _must_ be properly 0-terminated strings, otherwise this\n * will read beyond the end of the buffer and possibly crash.\n */\nsize_t av_strlcat(char *dst, const char *src, size_t size);\n\n/**\n * Append output to a string, according to a format. Never write out of\n * the destination buffer, and always put a terminating 0 within\n * the buffer.\n * @param dst destination buffer (string to which the output is\n *  appended)\n * @param size total size of the destination buffer\n * @param fmt printf-compatible format string, specifying how the\n *  following parameters are used\n * @return the length of the string that would have been generated\n *  if enough space had been available\n */\nsize_t av_strlcatf(char *dst, size_t size, const char *fmt, ...) av_printf_format(3, 4);\n\n/**\n * Get the count of continuous non-zero chars starting from the beginning.\n *\n * @param len maximum number of characters to check in the string, that\n *            is the maximum value which is returned by the function\n */\nstatic inline size_t av_strnlen(const char *s, size_t len)\n{\n    size_t i;\n    for (i = 0; i < len && s[i]; i++)\n        ;\n    return i;\n}\n\n/**\n * Print arguments following specified format into a large enough auto\n * allocated buffer. It is similar to GNU asprintf().\n * @param fmt printf-compatible format string, specifying how the\n *            following parameters are used.\n * @return the allocated string\n * @note You have to free the string yourself with av_free().\n */\nchar *av_asprintf(const char *fmt, ...) av_printf_format(1, 2);\n\n/**\n * Convert a number to an av_malloced string.\n */\nchar *av_d2str(double d);\n\n/**\n * Unescape the given string until a non escaped terminating char,\n * and return the token corresponding to the unescaped string.\n *\n * The normal \\ and ' escaping is supported. 
Leading and trailing\n * whitespaces are removed, unless they are escaped with '\\' or are\n * enclosed between ''.\n *\n * @param buf the buffer to parse, buf will be updated to point to the\n * terminating char\n * @param term a 0-terminated list of terminating chars\n * @return the malloced unescaped string, which must be av_freed by\n * the user, NULL in case of allocation failure\n */\nchar *av_get_token(const char **buf, const char *term);\n\n/**\n * Split the string into several tokens which can be accessed by\n * successive calls to av_strtok().\n *\n * A token is defined as a sequence of characters not belonging to the\n * set specified in delim.\n *\n * On the first call to av_strtok(), s should point to the string to\n * parse, and the value of saveptr is ignored. In subsequent calls, s\n * should be NULL, and saveptr should be unchanged since the previous\n * call.\n *\n * This function is similar to strtok_r() defined in POSIX.1.\n *\n * @param s the string to parse, may be NULL\n * @param delim 0-terminated list of token delimiters, must be non-NULL\n * @param saveptr user-provided pointer which points to stored\n * information necessary for av_strtok() to continue scanning the same\n * string. 
saveptr is updated to point to the next character after the\n * first delimiter found, or to NULL if the string was terminated\n * @return the found token, or NULL when no token is found\n */\nchar *av_strtok(char *s, const char *delim, char **saveptr);\n\n/**\n * Locale-independent conversion of ASCII isdigit.\n */\nstatic inline av_const int av_isdigit(int c)\n{\n    return c >= '0' && c <= '9';\n}\n\n/**\n * Locale-independent conversion of ASCII isgraph.\n */\nstatic inline av_const int av_isgraph(int c)\n{\n    return c > 32 && c < 127;\n}\n\n/**\n * Locale-independent conversion of ASCII isspace.\n */\nstatic inline av_const int av_isspace(int c)\n{\n    return c == ' ' || c == '\\f' || c == '\\n' || c == '\\r' || c == '\\t' ||\n           c == '\\v';\n}\n\n/**\n * Locale-independent conversion of ASCII characters to uppercase.\n */\nstatic inline av_const int av_toupper(int c)\n{\n    if (c >= 'a' && c <= 'z')\n        c ^= 0x20;\n    return c;\n}\n\n/**\n * Locale-independent conversion of ASCII characters to lowercase.\n */\nstatic inline av_const int av_tolower(int c)\n{\n    if (c >= 'A' && c <= 'Z')\n        c ^= 0x20;\n    return c;\n}\n\n/**\n * Locale-independent conversion of ASCII isxdigit.\n */\nstatic inline av_const int av_isxdigit(int c)\n{\n    c = av_tolower(c);\n    return av_isdigit(c) || (c >= 'a' && c <= 'f');\n}\n\n/**\n * Locale-independent case-insensitive compare.\n * @note This means only ASCII-range characters are case-insensitive\n */\nint av_strcasecmp(const char *a, const char *b);\n\n/**\n * Locale-independent case-insensitive compare.\n * @note This means only ASCII-range characters are case-insensitive\n */\nint av_strncasecmp(const char *a, const char *b, size_t n);\n\n/**\n * Locale-independent string replace.\n * @note This means only ASCII-range characters are replaced\n */\nchar *av_strireplace(const char *str, const char *from, const char *to);\n\n/**\n * Thread safe basename.\n * @param path the path, on DOS both \\ and 
/ are considered separators.\n * @return pointer to the basename substring.\n */\nconst char *av_basename(const char *path);\n\n/**\n * Thread safe dirname.\n * @param path the path, on DOS both \\ and / are considered separators.\n * @return the path with the separator replaced by the string terminator or \".\".\n * @note the function may change the input string.\n */\nconst char *av_dirname(char *path);\n\n/**\n * Match instances of a name in a comma-separated list of names.\n * List entries are checked from the start to the end of the names list;\n * the first match ends further processing. If an entry prefixed with '-'\n * matches, then 0 is returned. The \"ALL\" list entry is considered to\n * match all names.\n *\n * @param name  Name to look for.\n * @param names List of names.\n * @return 1 on match, 0 otherwise.\n */\nint av_match_name(const char *name, const char *names);\n\n/**\n * Append path component to the existing path.\n * A path separator '/' is placed between them when needed.\n * The resulting string has to be freed with av_free().\n * @param path      base path\n * @param component component to be appended\n * @return new path or NULL on error.\n */\nchar *av_append_path_component(const char *path, const char *component);\n\nenum AVEscapeMode {\n    AV_ESCAPE_MODE_AUTO,      ///< Use auto-selected escaping mode.\n    AV_ESCAPE_MODE_BACKSLASH, ///< Use backslash escaping.\n    AV_ESCAPE_MODE_QUOTE,     ///< Use single-quote escaping.\n};\n\n/**\n * Consider spaces special and escape them even in the middle of the\n * string.\n *\n * This is equivalent to adding the whitespace characters to the special\n * characters list, except it is guaranteed to use the exact same list\n * of whitespace characters as the rest of libavutil.\n */\n#define AV_ESCAPE_FLAG_WHITESPACE (1 << 0)\n\n/**\n * Escape only specified special characters.\n * Without this flag, also escape any characters that may be considered\n * special by av_get_token(), such as the single 
quote.\n */\n#define AV_ESCAPE_FLAG_STRICT (1 << 1)\n\n/**\n * Escape string in src, and put the escaped string in an allocated\n * string in *dst, which must be freed with av_free().\n *\n * @param dst           pointer where an allocated string is put\n * @param src           string to escape, must be non-NULL\n * @param special_chars string containing the special characters which\n *                      need to be escaped, can be NULL\n * @param mode          escape mode to employ, see AV_ESCAPE_MODE_* macros.\n *                      Any unknown value for mode will be considered equivalent to\n *                      AV_ESCAPE_MODE_BACKSLASH, but this behaviour can change without\n *                      notice.\n * @param flags         flags which control how to escape, see AV_ESCAPE_FLAG_* macros\n * @return the length of the allocated string, or a negative error code in case of error\n * @see av_bprint_escape()\n */\nav_warn_unused_result\nint av_escape(char **dst, const char *src, const char *special_chars,\n              enum AVEscapeMode mode, int flags);\n\n#define AV_UTF8_FLAG_ACCEPT_INVALID_BIG_CODES          1 ///< accept codepoints over 0x10FFFF\n#define AV_UTF8_FLAG_ACCEPT_NON_CHARACTERS             2 ///< accept non-characters - 0xFFFE and 0xFFFF\n#define AV_UTF8_FLAG_ACCEPT_SURROGATES                 4 ///< accept UTF-16 surrogate codes\n#define AV_UTF8_FLAG_EXCLUDE_XML_INVALID_CONTROL_CODES 8 ///< exclude control codes not accepted by XML\n\n#define AV_UTF8_FLAG_ACCEPT_ALL \\\n    AV_UTF8_FLAG_ACCEPT_INVALID_BIG_CODES|AV_UTF8_FLAG_ACCEPT_NON_CHARACTERS|AV_UTF8_FLAG_ACCEPT_SURROGATES\n\n/**\n * Read and decode a single UTF-8 code point (character) from the\n * buffer in *buf, and update *buf to point to the next byte to\n * decode.\n *\n * In case of an invalid byte sequence, the pointer will be updated to\n * the next byte after the invalid sequence and the function will\n * return an error code.\n *\n * Depending on the specified flags, the 
function will also fail in\n * case the decoded code point does not belong to a valid range.\n *\n * @note For speed-relevant code a carefully implemented use of\n * GET_UTF8() may be preferred.\n *\n * @param codep   pointer used to return the parsed code in case of success.\n *                The value in *codep is set even in case the range check fails.\n * @param bufp    pointer to the address of the first byte of the sequence\n *                to decode, updated by the function to point to the\n *                next byte after the decoded sequence\n * @param buf_end pointer to the end of the buffer, points to the next\n *                byte past the last in the buffer. This is used to\n *                avoid buffer overreads (in case of an unfinished\n *                UTF-8 sequence towards the end of the buffer).\n * @param flags   a collection of AV_UTF8_FLAG_* flags\n * @return >= 0 in case a sequence was successfully read, a negative\n * value in case of invalid sequence\n */\nav_warn_unused_result\nint av_utf8_decode(int32_t *codep, const uint8_t **bufp, const uint8_t *buf_end,\n                   unsigned int flags);\n\n/**\n * Check if a name is in a list.\n * @returns 0 if not found, or the 1-based index where it has been found in the\n *            list.\n */\nint av_match_list(const char *name, const char *list, char separator);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_AVSTRING_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/avutil.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_AVUTIL_H\n#define AVUTIL_AVUTIL_H\n\n/**\n * @file\n * @ingroup lavu\n * Convenience header that includes @ref lavu \"libavutil\"'s core.\n */\n\n/**\n * @mainpage\n *\n * @section ffmpeg_intro Introduction\n *\n * This document describes the usage of the different libraries\n * provided by FFmpeg.\n *\n * @li @ref libavc \"libavcodec\" encoding/decoding library\n * @li @ref lavfi \"libavfilter\" graph-based frame editing library\n * @li @ref libavf \"libavformat\" I/O and muxing/demuxing library\n * @li @ref lavd \"libavdevice\" special devices muxing/demuxing library\n * @li @ref lavu \"libavutil\" common utility library\n * @li @ref lswr \"libswresample\" audio resampling, format conversion and mixing\n * @li @ref lpp  \"libpostproc\" post processing library\n * @li @ref libsws \"libswscale\" color conversion and scaling library\n *\n * @section ffmpeg_versioning Versioning and compatibility\n *\n * Each of the FFmpeg libraries contains a version.h header, which defines a\n * major, minor and micro version number with the\n * <em>LIBRARYNAME_VERSION_{MAJOR,MINOR,MICRO}</em> macros. 
The major version\n * number is incremented with backward incompatible changes - e.g. removing\n * parts of the public API, reordering public struct members, etc. The minor\n * version number is incremented for backward compatible API changes or major\n * new features - e.g. adding a new public function or a new decoder. The micro\n * version number is incremented for smaller changes that a calling program\n * might still want to check for - e.g. changing behavior in a previously\n * unspecified situation.\n *\n * FFmpeg guarantees backward API and ABI compatibility for each library as long\n * as its major version number is unchanged. This means that no public symbols\n * will be removed or renamed. Types and names of the public struct members and\n * values of public macros and enums will remain the same (unless they were\n * explicitly declared as not part of the public API). Documented behavior will\n * not change.\n *\n * In other words, any correct program that works with a given FFmpeg snapshot\n * should work just as well without any changes with any later snapshot with the\n * same major versions. This applies to both rebuilding the program against new\n * FFmpeg versions or to replacing the dynamic FFmpeg libraries that a program\n * links against.\n *\n * However, new public symbols may be added and new members may be appended to\n * public structs whose size is not part of public ABI (most public structs in\n * FFmpeg). New macros and enum values may be added. Behavior in undocumented\n * situations may change slightly (and be documented). All those are accompanied\n * by an entry in doc/APIchanges and incrementing either the minor or micro\n * version number.\n */\n\n/**\n * @defgroup lavu libavutil\n * Common code shared across all FFmpeg libraries.\n *\n * @note\n * libavutil is designed to be modular. 
In most cases, in order to use the\n * functions provided by one component of libavutil you must explicitly include\n * the specific header containing that feature. If you are only using\n * media-related components, you could simply include libavutil/avutil.h, which\n * brings in most of the \"core\" components.\n *\n * @{\n *\n * @defgroup lavu_crypto Crypto and Hashing\n *\n * @{\n * @}\n *\n * @defgroup lavu_math Mathematics\n * @{\n *\n * @}\n *\n * @defgroup lavu_string String Manipulation\n *\n * @{\n *\n * @}\n *\n * @defgroup lavu_mem Memory Management\n *\n * @{\n *\n * @}\n *\n * @defgroup lavu_data Data Structures\n * @{\n *\n * @}\n *\n * @defgroup lavu_video Video related\n *\n * @{\n *\n * @}\n *\n * @defgroup lavu_audio Audio related\n *\n * @{\n *\n * @}\n *\n * @defgroup lavu_error Error Codes\n *\n * @{\n *\n * @}\n *\n * @defgroup lavu_log Logging Facility\n *\n * @{\n *\n * @}\n *\n * @defgroup lavu_misc Other\n *\n * @{\n *\n * @defgroup preproc_misc Preprocessor String Macros\n *\n * @{\n *\n * @}\n *\n * @defgroup version_utils Library Version Macros\n *\n * @{\n *\n * @}\n */\n\n\n/**\n * @addtogroup lavu_ver\n * @{\n */\n\n/**\n * Return the LIBAVUTIL_VERSION_INT constant.\n */\nunsigned avutil_version(void);\n\n/**\n * Return an informative version string. This usually is the actual release\n * version number or a git commit description. This string has no fixed format\n * and can change any time. 
It should never be parsed by code.\n */\nconst char *av_version_info(void);\n\n/**\n * Return the libavutil build-time configuration.\n */\nconst char *avutil_configuration(void);\n\n/**\n * Return the libavutil license.\n */\nconst char *avutil_license(void);\n\n/**\n * @}\n */\n\n/**\n * @addtogroup lavu_media Media Type\n * @brief Media Type\n */\n\nenum AVMediaType {\n    AVMEDIA_TYPE_UNKNOWN = -1,  ///< Usually treated as AVMEDIA_TYPE_DATA\n    AVMEDIA_TYPE_VIDEO,\n    AVMEDIA_TYPE_AUDIO,\n    AVMEDIA_TYPE_DATA,          ///< Opaque data information usually continuous\n    AVMEDIA_TYPE_SUBTITLE,\n    AVMEDIA_TYPE_ATTACHMENT,    ///< Opaque data information usually sparse\n    AVMEDIA_TYPE_NB\n};\n\n/**\n * Return a string describing the media_type enum, NULL if media_type\n * is unknown.\n */\nconst char *av_get_media_type_string(enum AVMediaType media_type);\n\n/**\n * @defgroup lavu_const Constants\n * @{\n *\n * @defgroup lavu_enc Encoding specific\n *\n * @note those definitions should move to avcodec\n * @{\n */\n\n#define FF_LAMBDA_SHIFT 7\n#define FF_LAMBDA_SCALE (1<<FF_LAMBDA_SHIFT)\n#define FF_QP2LAMBDA 118 ///< factor to convert from H.263 QP to lambda\n#define FF_LAMBDA_MAX (256*128-1)\n\n#define FF_QUALITY_SCALE FF_LAMBDA_SCALE //FIXME maybe remove\n\n/**\n * @}\n * @defgroup lavu_time Timestamp specific\n *\n * FFmpeg internal timebase and timestamp definitions\n *\n * @{\n */\n\n/**\n * @brief Undefined timestamp value\n *\n * Usually reported by demuxers that work on containers that do not provide\n * either pts or dts.\n */\n\n#define AV_NOPTS_VALUE          ((int64_t)UINT64_C(0x8000000000000000))\n\n/**\n * Internal time base represented as integer\n */\n\n#define AV_TIME_BASE            1000000\n\n/**\n * Internal time base represented as fractional value\n */\n\n#define AV_TIME_BASE_Q          (AVRational){1, AV_TIME_BASE}\n\n/**\n * @}\n * @}\n * @defgroup lavu_picture Image related\n *\n * AVPicture types, pixel formats and basic image 
planes manipulation.\n *\n * @{\n */\n\nenum AVPictureType {\n    AV_PICTURE_TYPE_NONE = 0, ///< Undefined\n    AV_PICTURE_TYPE_I,     ///< Intra\n    AV_PICTURE_TYPE_P,     ///< Predicted\n    AV_PICTURE_TYPE_B,     ///< Bi-dir predicted\n    AV_PICTURE_TYPE_S,     ///< S(GMC)-VOP MPEG-4\n    AV_PICTURE_TYPE_SI,    ///< Switching Intra\n    AV_PICTURE_TYPE_SP,    ///< Switching Predicted\n    AV_PICTURE_TYPE_BI,    ///< BI type\n};\n\n/**\n * Return a single letter to describe the given picture type\n * pict_type.\n *\n * @param[in] pict_type the picture type @return a single character\n * representing the picture type, '?' if pict_type is unknown\n */\nchar av_get_picture_type_char(enum AVPictureType pict_type);\n\n/**\n * @}\n */\n\n#include \"common.h\"\n#include \"error.h\"\n#include \"rational.h\"\n#include \"version.h\"\n#include \"macros.h\"\n#include \"mathematics.h\"\n#include \"log.h\"\n#include \"pixfmt.h\"\n\n/**\n * Return x default pointer in case p is NULL.\n */\nstatic inline void *av_x_if_null(const void *p, const void *x)\n{\n    return (void *)(intptr_t)(p ? 
p : x);\n}\n\n/**\n * Compute the length of an integer list.\n *\n * @param elsize  size in bytes of each list element (only 1, 2, 4 or 8)\n * @param term    list terminator (usually 0 or -1)\n * @param list    pointer to the list\n * @return  length of the list, in elements, not counting the terminator\n */\nunsigned av_int_list_length_for_size(unsigned elsize,\n                                     const void *list, uint64_t term) av_pure;\n\n/**\n * Compute the length of an integer list.\n *\n * @param term  list terminator (usually 0 or -1)\n * @param list  pointer to the list\n * @return  length of the list, in elements, not counting the terminator\n */\n#define av_int_list_length(list, term) \\\n    av_int_list_length_for_size(sizeof(*(list)), list, term)\n\n/**\n * Open a file using a UTF-8 filename.\n * The API of this function matches POSIX fopen(), errors are returned through\n * errno.\n */\nFILE *av_fopen_utf8(const char *path, const char *mode);\n\n/**\n * Return the fractional representation of the internal time base.\n */\nAVRational av_get_time_base_q(void);\n\n#define AV_FOURCC_MAX_STRING_SIZE 32\n\n#define av_fourcc2str(fourcc) av_fourcc_make_string((char[AV_FOURCC_MAX_STRING_SIZE]){0}, fourcc)\n\n/**\n * Fill the provided buffer with a string containing a FourCC (four-character\n * code) representation.\n *\n * @param buf    a buffer with size in bytes of at least AV_FOURCC_MAX_STRING_SIZE\n * @param fourcc the fourcc to represent\n * @return the buffer in input\n */\nchar *av_fourcc_make_string(char *buf, uint32_t fourcc);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_AVUTIL_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/base64.h",
    "content": "/*\n * Copyright (c) 2006 Ryan Martell. (rdm4@martellventures.com)\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_BASE64_H\n#define AVUTIL_BASE64_H\n\n#include <stdint.h>\n\n/**\n * @defgroup lavu_base64 Base64\n * @ingroup lavu_crypto\n * @{\n */\n\n/**\n * Decode a base64-encoded string.\n *\n * @param out      buffer for decoded data\n * @param in       null-terminated input string\n * @param out_size size in bytes of the out buffer, must be at\n *                 least 3/4 of the length of in, that is AV_BASE64_DECODE_SIZE(strlen(in))\n * @return         number of bytes written, or a negative value in case of\n *                 invalid input\n */\nint av_base64_decode(uint8_t *out, const char *in, int out_size);\n\n/**\n * Calculate the output size in bytes needed to decode a base64 string\n * with length x to a data buffer.\n */\n#define AV_BASE64_DECODE_SIZE(x) ((x) * 3LL / 4)\n\n/**\n * Encode data to base64 and null-terminate.\n *\n * @param out      buffer for encoded data\n * @param out_size size in bytes of the out buffer (including the\n *                 null terminator), must be at least AV_BASE64_SIZE(in_size)\n * @param in       input buffer containing the data to encode\n * @param in_size  size 
in bytes of the in buffer\n * @return         out or NULL in case of error\n */\nchar *av_base64_encode(char *out, int out_size, const uint8_t *in, int in_size);\n\n/**\n * Calculate the output size needed to base64-encode x bytes to a\n * null-terminated string.\n */\n#define AV_BASE64_SIZE(x)  (((x)+2) / 3 * 4 + 1)\n\n /**\n  * @}\n  */\n\n#endif /* AVUTIL_BASE64_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/blowfish.h",
    "content": "/*\n * Blowfish algorithm\n * Copyright (c) 2012 Samuel Pitoiset\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_BLOWFISH_H\n#define AVUTIL_BLOWFISH_H\n\n#include <stdint.h>\n\n/**\n * @defgroup lavu_blowfish Blowfish\n * @ingroup lavu_crypto\n * @{\n */\n\n#define AV_BF_ROUNDS 16\n\ntypedef struct AVBlowfish {\n    uint32_t p[AV_BF_ROUNDS + 2];\n    uint32_t s[4][256];\n} AVBlowfish;\n\n/**\n * Allocate an AVBlowfish context.\n */\nAVBlowfish *av_blowfish_alloc(void);\n\n/**\n * Initialize an AVBlowfish context.\n *\n * @param ctx an AVBlowfish context\n * @param key a key\n * @param key_len length of the key\n */\nvoid av_blowfish_init(struct AVBlowfish *ctx, const uint8_t *key, int key_len);\n\n/**\n * Encrypt or decrypt a buffer using a previously initialized context.\n *\n * @param ctx an AVBlowfish context\n * @param xl left four-byte half of the input to be encrypted\n * @param xr right four-byte half of the input to be encrypted\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_blowfish_crypt_ecb(struct AVBlowfish *ctx, uint32_t *xl, uint32_t *xr,\n                           int decrypt);\n\n/**\n * Encrypt or decrypt a buffer using a previously initialized context.\n *\n * @param ctx an 
AVBlowfish context\n * @param dst destination array, can be equal to src\n * @param src source array, can be equal to dst\n * @param count number of 8 byte blocks\n * @param iv initialization vector for CBC mode, if NULL ECB will be used\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_blowfish_crypt(struct AVBlowfish *ctx, uint8_t *dst, const uint8_t *src,\n                       int count, uint8_t *iv, int decrypt);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_BLOWFISH_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/bprint.h",
    "content": "/*\n * Copyright (c) 2012 Nicolas George\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_BPRINT_H\n#define AVUTIL_BPRINT_H\n\n#include <stdarg.h>\n\n#include \"attributes.h\"\n#include \"avstring.h\"\n\n/**\n * Define a structure with extra padding to a fixed size.\n * This helps ensure binary compatibility with future versions.\n */\n\n#define FF_PAD_STRUCTURE(name, size, ...) \\\nstruct ff_pad_helper_##name { __VA_ARGS__ }; \\\ntypedef struct name { \\\n    __VA_ARGS__ \\\n    char reserved_padding[size - sizeof(struct ff_pad_helper_##name)]; \\\n} name;\n\n/**\n * Buffer to print data progressively\n *\n * The string buffer grows as necessary and is always 0-terminated.\n * The content of the string is never accessed, and thus is\n * encoding-agnostic and can even hold binary data.\n *\n * Small buffers are kept in the structure itself, and thus require no\n * memory allocation at all (unless the contents of the buffer are needed\n * after the structure goes out of scope). 
This is almost as lightweight as\n * declaring a local \"char buf[512]\".\n *\n * The length of the string can go beyond the allocated size: the buffer is\n * then truncated, but the functions still keep account of the actual total\n * length.\n *\n * In other words, buf->len can be greater than buf->size and records the\n * total length of what would have been written to the buffer if there had\n * been enough memory.\n *\n * Append operations do not need to be tested for failure: if a memory\n * allocation fails, data stops being appended to the buffer, but the length\n * is still updated. This situation can be tested with\n * av_bprint_is_complete().\n *\n * The size_max field determines several possible behaviours:\n *\n * size_max = -1 (= UINT_MAX) or any large value will let the buffer be\n * reallocated as necessary, with an amortized linear cost.\n *\n * size_max = 0 prevents writing anything to the buffer: only the total\n * length is computed. The write operations can then possibly be repeated in\n * a buffer with exactly the necessary size\n * (using size_init = size_max = len + 1).\n *\n * size_max = 1 is automatically replaced by the exact size available in the\n * structure itself, thus ensuring no dynamic memory allocation. 
The\n * internal buffer is large enough to hold a reasonable paragraph of text,\n * such as the current paragraph.\n */\n\nFF_PAD_STRUCTURE(AVBPrint, 1024,\n    char *str;         /**< string so far */\n    unsigned len;      /**< length so far */\n    unsigned size;     /**< allocated memory */\n    unsigned size_max; /**< maximum allocated memory */\n    char reserved_internal_buffer[1];\n)\n\n/**\n * Convenience macros for special values for av_bprint_init() size_max\n * parameter.\n */\n#define AV_BPRINT_SIZE_UNLIMITED  ((unsigned)-1)\n#define AV_BPRINT_SIZE_AUTOMATIC  1\n#define AV_BPRINT_SIZE_COUNT_ONLY 0\n\n/**\n * Init a print buffer.\n *\n * @param buf        buffer to init\n * @param size_init  initial size (including the final 0)\n * @param size_max   maximum size;\n *                   0 means do not write anything, just count the length;\n *                   1 is replaced by the maximum value for automatic storage;\n *                   any large value means that the internal buffer will be\n *                   reallocated as needed up to that limit; -1 is converted to\n *                   UINT_MAX, the largest limit possible.\n *                   Check also AV_BPRINT_SIZE_* macros.\n */\nvoid av_bprint_init(AVBPrint *buf, unsigned size_init, unsigned size_max);\n\n/**\n * Init a print buffer using a pre-existing buffer.\n *\n * The buffer will not be reallocated.\n *\n * @param buf     buffer structure to init\n * @param buffer  byte buffer to use for the string data\n * @param size    size of buffer\n */\nvoid av_bprint_init_for_buffer(AVBPrint *buf, char *buffer, unsigned size);\n\n/**\n * Append a formatted string to a print buffer.\n */\nvoid av_bprintf(AVBPrint *buf, const char *fmt, ...) 
av_printf_format(2, 3);\n\n/**\n * Append a formatted string to a print buffer.\n */\nvoid av_vbprintf(AVBPrint *buf, const char *fmt, va_list vl_arg);\n\n/**\n * Append char c n times to a print buffer.\n */\nvoid av_bprint_chars(AVBPrint *buf, char c, unsigned n);\n\n/**\n * Append data to a print buffer.\n *\n * @param buf  bprint buffer to use\n * @param data pointer to data\n * @param size size of data\n */\nvoid av_bprint_append_data(AVBPrint *buf, const char *data, unsigned size);\n\nstruct tm;\n/**\n * Append a formatted date and time to a print buffer.\n *\n * @param buf  bprint buffer to use\n * @param fmt  date and time format string, see strftime()\n * @param tm   broken-down time structure to translate\n *\n * @note due to poor design of the standard strftime function, it may\n * produce poor results if the format string expands to a very long text and\n * the bprint buffer is near the limit stated by the size_max option.\n */\nvoid av_bprint_strftime(AVBPrint *buf, const char *fmt, const struct tm *tm);\n\n/**\n * Allocate bytes in the buffer for external use.\n *\n * @param[in]  buf          buffer structure\n * @param[in]  size         required size\n * @param[out] mem          pointer to the memory area\n * @param[out] actual_size  size of the memory area after allocation;\n *                          can be larger or smaller than size\n */\nvoid av_bprint_get_buffer(AVBPrint *buf, unsigned size,\n                          unsigned char **mem, unsigned *actual_size);\n\n/**\n * Reset the string to \"\" but keep internal allocated data.\n */\nvoid av_bprint_clear(AVBPrint *buf);\n\n/**\n * Test if the print buffer is complete (not truncated).\n *\n * It may have been truncated due to a memory allocation failure\n * or the size_max limit (compare size and size_max if necessary).\n */\nstatic inline int av_bprint_is_complete(const AVBPrint *buf)\n{\n    return buf->len < buf->size;\n}\n\n/**\n * Finalize a print buffer.\n *\n * The print buffer can no 
longer be used afterwards,\n * but the len and size fields are still valid.\n *\n * @arg[out] ret_str  if not NULL, used to return a permanent copy of the\n *                    buffer contents, or NULL if memory allocation fails;\n *                    if NULL, the buffer is discarded and freed\n * @return  0 for success or error code (probably AVERROR(ENOMEM))\n */\nint av_bprint_finalize(AVBPrint *buf, char **ret_str);\n\n/**\n * Escape the content in src and append it to dstbuf.\n *\n * @param dstbuf        already inited destination bprint buffer\n * @param src           string containing the text to escape\n * @param special_chars string containing the special characters which\n *                      need to be escaped, can be NULL\n * @param mode          escape mode to employ, see AV_ESCAPE_MODE_* macros.\n *                      Any unknown value for mode will be considered equivalent to\n *                      AV_ESCAPE_MODE_BACKSLASH, but this behaviour can change without\n *                      notice.\n * @param flags         flags which control how to escape, see AV_ESCAPE_FLAG_* macros\n */\nvoid av_bprint_escape(AVBPrint *dstbuf, const char *src, const char *special_chars,\n                      enum AVEscapeMode mode, int flags);\n\n#endif /* AVUTIL_BPRINT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/bswap.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * byte swapping routines\n */\n\n#ifndef AVUTIL_BSWAP_H\n#define AVUTIL_BSWAP_H\n\n#include <stdint.h>\n#include \"libavutil/avconfig.h\"\n#include \"attributes.h\"\n\n#ifdef HAVE_AV_CONFIG_H\n\n#include \"config.h\"\n\n#if   ARCH_AARCH64\n#   include \"aarch64/bswap.h\"\n#elif ARCH_ARM\n#   include \"arm/bswap.h\"\n#elif ARCH_AVR32\n#   include \"avr32/bswap.h\"\n#elif ARCH_SH4\n#   include \"sh4/bswap.h\"\n#elif ARCH_X86\n#   include \"x86/bswap.h\"\n#endif\n\n#endif /* HAVE_AV_CONFIG_H */\n\n#define AV_BSWAP16C(x) (((x) << 8 & 0xff00)  | ((x) >> 8 & 0x00ff))\n#define AV_BSWAP32C(x) (AV_BSWAP16C(x) << 16 | AV_BSWAP16C((x) >> 16))\n#define AV_BSWAP64C(x) (AV_BSWAP32C(x) << 32 | AV_BSWAP32C((x) >> 32))\n\n#define AV_BSWAPC(s, x) AV_BSWAP##s##C(x)\n\n#ifndef av_bswap16\nstatic av_always_inline av_const uint16_t av_bswap16(uint16_t x)\n{\n    x= (x>>8) | (x<<8);\n    return x;\n}\n#endif\n\n#ifndef av_bswap32\nstatic av_always_inline av_const uint32_t av_bswap32(uint32_t x)\n{\n    return AV_BSWAP32C(x);\n}\n#endif\n\n#ifndef av_bswap64\nstatic inline uint64_t av_const av_bswap64(uint64_t x)\n{\n    
return (uint64_t)av_bswap32(x) << 32 | av_bswap32(x >> 32);\n}\n#endif\n\n// be2ne ... big-endian to native-endian\n// le2ne ... little-endian to native-endian\n\n#if AV_HAVE_BIGENDIAN\n#define av_be2ne16(x) (x)\n#define av_be2ne32(x) (x)\n#define av_be2ne64(x) (x)\n#define av_le2ne16(x) av_bswap16(x)\n#define av_le2ne32(x) av_bswap32(x)\n#define av_le2ne64(x) av_bswap64(x)\n#define AV_BE2NEC(s, x) (x)\n#define AV_LE2NEC(s, x) AV_BSWAPC(s, x)\n#else\n#define av_be2ne16(x) av_bswap16(x)\n#define av_be2ne32(x) av_bswap32(x)\n#define av_be2ne64(x) av_bswap64(x)\n#define av_le2ne16(x) (x)\n#define av_le2ne32(x) (x)\n#define av_le2ne64(x) (x)\n#define AV_BE2NEC(s, x) AV_BSWAPC(s, x)\n#define AV_LE2NEC(s, x) (x)\n#endif\n\n#define AV_BE2NE16C(x) AV_BE2NEC(16, x)\n#define AV_BE2NE32C(x) AV_BE2NEC(32, x)\n#define AV_BE2NE64C(x) AV_BE2NEC(64, x)\n#define AV_LE2NE16C(x) AV_LE2NEC(16, x)\n#define AV_LE2NE32C(x) AV_LE2NEC(32, x)\n#define AV_LE2NE64C(x) AV_LE2NEC(64, x)\n\n#endif /* AVUTIL_BSWAP_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/buffer.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_buffer\n * refcounted data buffer API\n */\n\n#ifndef AVUTIL_BUFFER_H\n#define AVUTIL_BUFFER_H\n\n#include <stdint.h>\n\n/**\n * @defgroup lavu_buffer AVBuffer\n * @ingroup lavu_data\n *\n * @{\n * AVBuffer is an API for reference-counted data buffers.\n *\n * There are two core objects in this API -- AVBuffer and AVBufferRef. AVBuffer\n * represents the data buffer itself; it is opaque and not meant to be accessed\n * by the caller directly, but only through AVBufferRef. However, the caller may\n * e.g. compare two AVBuffer pointers to check whether two different references\n * are describing the same data buffer. AVBufferRef represents a single\n * reference to an AVBuffer and it is the object that may be manipulated by the\n * caller directly.\n *\n * There are two functions provided for creating a new AVBuffer with a single\n * reference -- av_buffer_alloc() to just allocate a new buffer, and\n * av_buffer_create() to wrap an existing array in an AVBuffer. 
From an existing\n * reference, additional references may be created with av_buffer_ref().\n * Use av_buffer_unref() to free a reference (this will automatically free the\n * data once all the references are freed).\n *\n * The convention throughout this API and the rest of FFmpeg is such that the\n * buffer is considered writable if there exists only one reference to it (and\n * it has not been marked as read-only). The av_buffer_is_writable() function is\n * provided to check whether this is true and av_buffer_make_writable() will\n * automatically create a new writable buffer when necessary.\n * Of course nothing prevents the calling code from violating this convention,\n * however that is safe only when all the existing references are under its\n * control.\n *\n * @note Referencing and unreferencing the buffers is thread-safe and thus\n * may be done from multiple threads simultaneously without any need for\n * additional locking.\n *\n * @note Two different references to the same buffer can point to different\n * parts of the buffer (i.e. their AVBufferRef.data will not be equal).\n */\n\n/**\n * A reference counted buffer type. It is opaque and is meant to be used through\n * references (AVBufferRef).\n */\ntypedef struct AVBuffer AVBuffer;\n\n/**\n * A reference to a data buffer.\n *\n * The size of this struct is not a part of the public ABI and it is not meant\n * to be allocated directly.\n */\ntypedef struct AVBufferRef {\n    AVBuffer *buffer;\n\n    /**\n     * The data buffer. 
It is considered writable if and only if\n     * this is the only reference to the buffer, in which case\n     * av_buffer_is_writable() returns 1.\n     */\n    uint8_t *data;\n    /**\n     * Size of data in bytes.\n     */\n    int      size;\n} AVBufferRef;\n\n/**\n * Allocate an AVBuffer of the given size using av_malloc().\n *\n * @return an AVBufferRef of given size or NULL when out of memory\n */\nAVBufferRef *av_buffer_alloc(int size);\n\n/**\n * Same as av_buffer_alloc(), except the returned buffer will be initialized\n * to zero.\n */\nAVBufferRef *av_buffer_allocz(int size);\n\n/**\n * Always treat the buffer as read-only, even when it has only one\n * reference.\n */\n#define AV_BUFFER_FLAG_READONLY (1 << 0)\n\n/**\n * Create an AVBuffer from an existing array.\n *\n * If this function is successful, data is owned by the AVBuffer. The caller may\n * only access data through the returned AVBufferRef and references derived from\n * it.\n * If this function fails, data is left untouched.\n * @param data   data array\n * @param size   size of data in bytes\n * @param free   a callback for freeing this buffer's data\n * @param opaque parameter to be got for processing or passed to free\n * @param flags  a combination of AV_BUFFER_FLAG_*\n *\n * @return an AVBufferRef referring to data on success, NULL on failure.\n */\nAVBufferRef *av_buffer_create(uint8_t *data, int size,\n                              void (*free)(void *opaque, uint8_t *data),\n                              void *opaque, int flags);\n\n/**\n * Default free callback, which calls av_free() on the buffer data.\n * This function is meant to be passed to av_buffer_create(), not called\n * directly.\n */\nvoid av_buffer_default_free(void *opaque, uint8_t *data);\n\n/**\n * Create a new reference to an AVBuffer.\n *\n * @return a new AVBufferRef referring to the same AVBuffer as buf or NULL on\n * failure.\n */\nAVBufferRef *av_buffer_ref(AVBufferRef *buf);\n\n/**\n * Free a given reference and 
automatically free the buffer if there are no more\n * references to it.\n *\n * @param buf the reference to be freed. The pointer is set to NULL on return.\n */\nvoid av_buffer_unref(AVBufferRef **buf);\n\n/**\n * @return 1 if the caller may write to the data referred to by buf (which is\n * true if and only if buf is the only reference to the underlying AVBuffer).\n * Return 0 otherwise.\n * A positive answer is valid until av_buffer_ref() is called on buf.\n */\nint av_buffer_is_writable(const AVBufferRef *buf);\n\n/**\n * @return the opaque parameter set by av_buffer_create.\n */\nvoid *av_buffer_get_opaque(const AVBufferRef *buf);\n\nint av_buffer_get_ref_count(const AVBufferRef *buf);\n\n/**\n * Create a writable reference from a given buffer reference, avoiding data copy\n * if possible.\n *\n * @param buf buffer reference to make writable. On success, buf is either left\n *            untouched, or it is unreferenced and a new writable AVBufferRef is\n *            written in its place. On failure, buf is left untouched.\n * @return 0 on success, a negative AVERROR on failure.\n */\nint av_buffer_make_writable(AVBufferRef **buf);\n\n/**\n * Reallocate a given buffer.\n *\n * @param buf  a buffer reference to reallocate. On success, buf will be\n *             unreferenced and a new reference with the required size will be\n *             written in its place. On failure buf will be left untouched. *buf\n *             may be NULL, then a new buffer is allocated.\n * @param size required new buffer size.\n * @return 0 on success, a negative AVERROR on failure.\n *\n * @note the buffer is actually reallocated with av_realloc() only if it was\n * initially allocated through av_buffer_realloc(NULL) and there is only one\n * reference to it (i.e. the one passed to this function). 
In all other cases\n * a new buffer is allocated and the data is copied.\n */\nint av_buffer_realloc(AVBufferRef **buf, int size);\n\n/**\n * @}\n */\n\n/**\n * @defgroup lavu_bufferpool AVBufferPool\n * @ingroup lavu_data\n *\n * @{\n * AVBufferPool is an API for a lock-free thread-safe pool of AVBuffers.\n *\n * Frequently allocating and freeing large buffers may be slow. AVBufferPool is\n * meant to solve this in cases when the caller needs a set of buffers of the\n * same size (the most obvious use case being buffers for raw video or audio\n * frames).\n *\n * At the beginning, the user must call av_buffer_pool_init() to create the\n * buffer pool. Then whenever a buffer is needed, call av_buffer_pool_get() to\n * get a reference to a new buffer, similar to av_buffer_alloc(). This new\n * reference works in all aspects the same way as the one created by\n * av_buffer_alloc(). However, when the last reference to this buffer is\n * unreferenced, it is returned to the pool instead of being freed and will be\n * reused for subsequent av_buffer_pool_get() calls.\n *\n * When the caller is done with the pool and no longer needs to allocate any new\n * buffers, av_buffer_pool_uninit() must be called to mark the pool as freeable.\n * Once all the buffers are released, it will automatically be freed.\n *\n * Allocating and releasing buffers with this API is thread-safe as long as\n * either the default alloc callback is used, or the user-supplied one is\n * thread-safe.\n */\n\n/**\n * The buffer pool. This structure is opaque and not meant to be accessed\n * directly. It is allocated with av_buffer_pool_init() and freed with\n * av_buffer_pool_uninit().\n */\ntypedef struct AVBufferPool AVBufferPool;\n\n/**\n * Allocate and initialize a buffer pool.\n *\n * @param size size of each buffer in this pool\n * @param alloc a function that will be used to allocate new buffers when the\n * pool is empty. 
May be NULL, then the default allocator will be used\n * (av_buffer_alloc()).\n * @return newly created buffer pool on success, NULL on error.\n */\nAVBufferPool *av_buffer_pool_init(int size, AVBufferRef* (*alloc)(int size));\n\n/**\n * Allocate and initialize a buffer pool with a more complex allocator.\n *\n * @param size size of each buffer in this pool\n * @param opaque arbitrary user data used by the allocator\n * @param alloc a function that will be used to allocate new buffers when the\n *              pool is empty.\n * @param pool_free a function that will be called immediately before the pool\n *                  is freed. I.e. after av_buffer_pool_uninit() is called\n *                  by the caller and all the frames are returned to the pool\n *                  and freed. It is intended to uninitialize the user opaque\n *                  data.\n * @return newly created buffer pool on success, NULL on error.\n */\nAVBufferPool *av_buffer_pool_init2(int size, void *opaque,\n                                   AVBufferRef* (*alloc)(void *opaque, int size),\n                                   void (*pool_free)(void *opaque));\n\n/**\n * Mark the pool as being available for freeing. It will actually be freed only\n * once all the allocated buffers associated with the pool are released. Thus it\n * is safe to call this function while some of the allocated buffers are still\n * in use.\n *\n * @param pool pointer to the pool to be freed. It will be set to NULL.\n */\nvoid av_buffer_pool_uninit(AVBufferPool **pool);\n\n/**\n * Allocate a new AVBuffer, reusing an old buffer from the pool when available.\n * This function may be called simultaneously from multiple threads.\n *\n * @return a reference to the new buffer on success, NULL on error.\n */\nAVBufferRef *av_buffer_pool_get(AVBufferPool *pool);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_BUFFER_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/camellia.h",
    "content": "/*\n * An implementation of the CAMELLIA algorithm as mentioned in RFC3713\n * Copyright (c) 2014 Supraja Meedinti\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_CAMELLIA_H\n#define AVUTIL_CAMELLIA_H\n\n#include <stdint.h>\n\n\n/**\n  * @file\n  * @brief Public header for libavutil CAMELLIA algorithm\n  * @defgroup lavu_camellia CAMELLIA\n  * @ingroup lavu_crypto\n  * @{\n  */\n\nextern const int av_camellia_size;\n\nstruct AVCAMELLIA;\n\n/**\n  * Allocate an AVCAMELLIA context\n  * To free the struct: av_free(ptr)\n  */\nstruct AVCAMELLIA *av_camellia_alloc(void);\n\n/**\n  * Initialize an AVCAMELLIA context.\n  *\n  * @param ctx an AVCAMELLIA context\n  * @param key a key of 16, 24, 32 bytes used for encryption/decryption\n  * @param key_bits number of keybits: possible are 128, 192, 256\n */\nint av_camellia_init(struct AVCAMELLIA *ctx, const uint8_t *key, int key_bits);\n\n/**\n  * Encrypt or decrypt a buffer using a previously initialized context\n  *\n  * @param ctx an AVCAMELLIA context\n  * @param dst destination array, can be equal to src\n  * @param src source array, can be equal to dst\n  * @param count number of 16 byte blocks\n  * @paran iv initialization vector for CBC mode, NULL for ECB mode\n  * 
@param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_camellia_crypt(struct AVCAMELLIA *ctx, uint8_t *dst, const uint8_t *src, int count, uint8_t* iv, int decrypt);\n\n/**\n * @}\n */\n#endif /* AVUTIL_CAMELLIA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/cast5.h",
    "content": "/*\n * An implementation of the CAST128 algorithm as mentioned in RFC2144\n * Copyright (c) 2014 Supraja Meedinti\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_CAST5_H\n#define AVUTIL_CAST5_H\n\n#include <stdint.h>\n\n\n/**\n  * @file\n  * @brief Public header for libavutil CAST5 algorithm\n  * @defgroup lavu_cast5 CAST5\n  * @ingroup lavu_crypto\n  * @{\n  */\n\nextern const int av_cast5_size;\n\nstruct AVCAST5;\n\n/**\n  * Allocate an AVCAST5 context\n  * To free the struct: av_free(ptr)\n  */\nstruct AVCAST5 *av_cast5_alloc(void);\n/**\n  * Initialize an AVCAST5 context.\n  *\n  * @param ctx an AVCAST5 context\n  * @param key a key of 5,6,...16 bytes used for encryption/decryption\n  * @param key_bits number of keybits: possible are 40,48,...,128\n  * @return 0 on success, less than 0 on failure\n */\nint av_cast5_init(struct AVCAST5 *ctx, const uint8_t *key, int key_bits);\n\n/**\n  * Encrypt or decrypt a buffer using a previously initialized context, ECB mode only\n  *\n  * @param ctx an AVCAST5 context\n  * @param dst destination array, can be equal to src\n  * @param src source array, can be equal to dst\n  * @param count number of 8 byte blocks\n  * @param decrypt 0 for encryption, 1 for decryption\n 
*/\nvoid av_cast5_crypt(struct AVCAST5 *ctx, uint8_t *dst, const uint8_t *src, int count, int decrypt);\n\n/**\n  * Encrypt or decrypt a buffer using a previously initialized context\n  *\n  * @param ctx an AVCAST5 context\n  * @param dst destination array, can be equal to src\n  * @param src source array, can be equal to dst\n  * @param count number of 8 byte blocks\n  * @param iv initialization vector for CBC mode, NULL for ECB mode\n  * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_cast5_crypt2(struct AVCAST5 *ctx, uint8_t *dst, const uint8_t *src, int count, uint8_t *iv, int decrypt);\n/**\n * @}\n */\n#endif /* AVUTIL_CAST5_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/channel_layout.h",
    "content": "/*\n * Copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n * Copyright (c) 2008 Peter Ross\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_CHANNEL_LAYOUT_H\n#define AVUTIL_CHANNEL_LAYOUT_H\n\n#include <stdint.h>\n\n/**\n * @file\n * audio channel layout utility functions\n */\n\n/**\n * @addtogroup lavu_audio\n * @{\n */\n\n/**\n * @defgroup channel_masks Audio channel masks\n *\n * A channel layout is a 64-bits integer with a bit set for every channel.\n * The number of bits set must be equal to the number of channels.\n * The value 0 means that the channel layout is not known.\n * @note this data structure is not powerful enough to handle channels\n * combinations that have the same channel multiple times, such as\n * dual-mono.\n *\n * @{\n */\n#define AV_CH_FRONT_LEFT             0x00000001\n#define AV_CH_FRONT_RIGHT            0x00000002\n#define AV_CH_FRONT_CENTER           0x00000004\n#define AV_CH_LOW_FREQUENCY          0x00000008\n#define AV_CH_BACK_LEFT              0x00000010\n#define AV_CH_BACK_RIGHT             0x00000020\n#define AV_CH_FRONT_LEFT_OF_CENTER   0x00000040\n#define AV_CH_FRONT_RIGHT_OF_CENTER  0x00000080\n#define AV_CH_BACK_CENTER            0x00000100\n#define AV_CH_SIDE_LEFT    
          0x00000200\n#define AV_CH_SIDE_RIGHT             0x00000400\n#define AV_CH_TOP_CENTER             0x00000800\n#define AV_CH_TOP_FRONT_LEFT         0x00001000\n#define AV_CH_TOP_FRONT_CENTER       0x00002000\n#define AV_CH_TOP_FRONT_RIGHT        0x00004000\n#define AV_CH_TOP_BACK_LEFT          0x00008000\n#define AV_CH_TOP_BACK_CENTER        0x00010000\n#define AV_CH_TOP_BACK_RIGHT         0x00020000\n#define AV_CH_STEREO_LEFT            0x20000000  ///< Stereo downmix.\n#define AV_CH_STEREO_RIGHT           0x40000000  ///< See AV_CH_STEREO_LEFT.\n#define AV_CH_WIDE_LEFT              0x0000000080000000ULL\n#define AV_CH_WIDE_RIGHT             0x0000000100000000ULL\n#define AV_CH_SURROUND_DIRECT_LEFT   0x0000000200000000ULL\n#define AV_CH_SURROUND_DIRECT_RIGHT  0x0000000400000000ULL\n#define AV_CH_LOW_FREQUENCY_2        0x0000000800000000ULL\n\n/** Channel mask value used for AVCodecContext.request_channel_layout\n    to indicate that the user requests the channel order of the decoder output\n    to be the native codec channel order. 
*/\n#define AV_CH_LAYOUT_NATIVE          0x8000000000000000ULL\n\n/**\n * @}\n * @defgroup channel_mask_c Audio channel layouts\n * @{\n * */\n#define AV_CH_LAYOUT_MONO              (AV_CH_FRONT_CENTER)\n#define AV_CH_LAYOUT_STEREO            (AV_CH_FRONT_LEFT|AV_CH_FRONT_RIGHT)\n#define AV_CH_LAYOUT_2POINT1           (AV_CH_LAYOUT_STEREO|AV_CH_LOW_FREQUENCY)\n#define AV_CH_LAYOUT_2_1               (AV_CH_LAYOUT_STEREO|AV_CH_BACK_CENTER)\n#define AV_CH_LAYOUT_SURROUND          (AV_CH_LAYOUT_STEREO|AV_CH_FRONT_CENTER)\n#define AV_CH_LAYOUT_3POINT1           (AV_CH_LAYOUT_SURROUND|AV_CH_LOW_FREQUENCY)\n#define AV_CH_LAYOUT_4POINT0           (AV_CH_LAYOUT_SURROUND|AV_CH_BACK_CENTER)\n#define AV_CH_LAYOUT_4POINT1           (AV_CH_LAYOUT_4POINT0|AV_CH_LOW_FREQUENCY)\n#define AV_CH_LAYOUT_2_2               (AV_CH_LAYOUT_STEREO|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT)\n#define AV_CH_LAYOUT_QUAD              (AV_CH_LAYOUT_STEREO|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)\n#define AV_CH_LAYOUT_5POINT0           (AV_CH_LAYOUT_SURROUND|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT)\n#define AV_CH_LAYOUT_5POINT1           (AV_CH_LAYOUT_5POINT0|AV_CH_LOW_FREQUENCY)\n#define AV_CH_LAYOUT_5POINT0_BACK      (AV_CH_LAYOUT_SURROUND|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)\n#define AV_CH_LAYOUT_5POINT1_BACK      (AV_CH_LAYOUT_5POINT0_BACK|AV_CH_LOW_FREQUENCY)\n#define AV_CH_LAYOUT_6POINT0           (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_CENTER)\n#define AV_CH_LAYOUT_6POINT0_FRONT     (AV_CH_LAYOUT_2_2|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)\n#define AV_CH_LAYOUT_HEXAGONAL         (AV_CH_LAYOUT_5POINT0_BACK|AV_CH_BACK_CENTER)\n#define AV_CH_LAYOUT_6POINT1           (AV_CH_LAYOUT_5POINT1|AV_CH_BACK_CENTER)\n#define AV_CH_LAYOUT_6POINT1_BACK      (AV_CH_LAYOUT_5POINT1_BACK|AV_CH_BACK_CENTER)\n#define AV_CH_LAYOUT_6POINT1_FRONT     (AV_CH_LAYOUT_6POINT0_FRONT|AV_CH_LOW_FREQUENCY)\n#define AV_CH_LAYOUT_7POINT0           (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)\n#define AV_CH_LAYOUT_7POINT0_FRONT     
(AV_CH_LAYOUT_5POINT0|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)\n#define AV_CH_LAYOUT_7POINT1           (AV_CH_LAYOUT_5POINT1|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)\n#define AV_CH_LAYOUT_7POINT1_WIDE      (AV_CH_LAYOUT_5POINT1|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)\n#define AV_CH_LAYOUT_7POINT1_WIDE_BACK (AV_CH_LAYOUT_5POINT1_BACK|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)\n#define AV_CH_LAYOUT_OCTAGONAL         (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_LEFT|AV_CH_BACK_CENTER|AV_CH_BACK_RIGHT)\n#define AV_CH_LAYOUT_HEXADECAGONAL     (AV_CH_LAYOUT_OCTAGONAL|AV_CH_WIDE_LEFT|AV_CH_WIDE_RIGHT|AV_CH_TOP_BACK_LEFT|AV_CH_TOP_BACK_RIGHT|AV_CH_TOP_BACK_CENTER|AV_CH_TOP_FRONT_CENTER|AV_CH_TOP_FRONT_LEFT|AV_CH_TOP_FRONT_RIGHT)\n#define AV_CH_LAYOUT_STEREO_DOWNMIX    (AV_CH_STEREO_LEFT|AV_CH_STEREO_RIGHT)\n\nenum AVMatrixEncoding {\n    AV_MATRIX_ENCODING_NONE,\n    AV_MATRIX_ENCODING_DOLBY,\n    AV_MATRIX_ENCODING_DPLII,\n    AV_MATRIX_ENCODING_DPLIIX,\n    AV_MATRIX_ENCODING_DPLIIZ,\n    AV_MATRIX_ENCODING_DOLBYEX,\n    AV_MATRIX_ENCODING_DOLBYHEADPHONE,\n    AV_MATRIX_ENCODING_NB\n};\n\n/**\n * Return a channel layout id that matches name, or 0 if no match is found.\n *\n * name can be one or several of the following notations,\n * separated by '+' or '|':\n * - the name of an usual channel layout (mono, stereo, 4.0, quad, 5.0,\n *   5.0(side), 5.1, 5.1(side), 7.1, 7.1(wide), downmix);\n * - the name of a single channel (FL, FR, FC, LFE, BL, BR, FLC, FRC, BC,\n *   SL, SR, TC, TFL, TFC, TFR, TBL, TBC, TBR, DL, DR);\n * - a number of channels, in decimal, followed by 'c', yielding\n *   the default channel layout for that number of channels (@see\n *   av_get_default_channel_layout);\n * - a channel layout mask, in hexadecimal starting with \"0x\" (see the\n *   AV_CH_* macros).\n *\n * Example: \"stereo+FC\" = \"2c+FC\" = \"2c+1c\" = \"0x7\"\n */\nuint64_t av_get_channel_layout(const char *name);\n\n/**\n * Return a channel layout and the 
number of channels based on the specified name.\n *\n * This function is similar to (@see av_get_channel_layout), but can also parse\n * unknown channel layout specifications.\n *\n * @param[in]  name             channel layout specification string\n * @param[out] channel_layout   parsed channel layout (0 if unknown)\n * @param[out] nb_channels      number of channels\n *\n * @return 0 on success, AVERROR(EINVAL) if the parsing fails.\n */\nint av_get_extended_channel_layout(const char *name, uint64_t* channel_layout, int* nb_channels);\n\n/**\n * Return a description of a channel layout.\n * If nb_channels is <= 0, it is guessed from the channel_layout.\n *\n * @param buf put here the string containing the channel layout\n * @param buf_size size in bytes of the buffer\n */\nvoid av_get_channel_layout_string(char *buf, int buf_size, int nb_channels, uint64_t channel_layout);\n\nstruct AVBPrint;\n/**\n * Append a description of a channel layout to a bprint buffer.\n */\nvoid av_bprint_channel_layout(struct AVBPrint *bp, int nb_channels, uint64_t channel_layout);\n\n/**\n * Return the number of channels in the channel layout.\n */\nint av_get_channel_layout_nb_channels(uint64_t channel_layout);\n\n/**\n * Return default channel layout for a given number of channels.\n */\nint64_t av_get_default_channel_layout(int nb_channels);\n\n/**\n * Get the index of a channel in channel_layout.\n *\n * @param channel a channel layout describing exactly one channel which must be\n *                present in channel_layout.\n *\n * @return index of channel in channel_layout on success, a negative AVERROR\n *         on error.\n */\nint av_get_channel_layout_channel_index(uint64_t channel_layout,\n                                        uint64_t channel);\n\n/**\n * Get the channel with the given index in channel_layout.\n */\nuint64_t av_channel_layout_extract_channel(uint64_t channel_layout, int index);\n\n/**\n * Get the name of a given channel.\n *\n * @return channel name on 
success, NULL on error.\n */\nconst char *av_get_channel_name(uint64_t channel);\n\n/**\n * Get the description of a given channel.\n *\n * @param channel  a channel layout with a single channel\n * @return  channel description on success, NULL on error\n */\nconst char *av_get_channel_description(uint64_t channel);\n\n/**\n * Get the value and name of a standard channel layout.\n *\n * @param[in]  index   index in an internal list, starting at 0\n * @param[out] layout  channel layout mask\n * @param[out] name    name of the layout\n * @return  0  if the layout exists,\n *          <0 if index is beyond the limits\n */\nint av_get_standard_channel_layout(unsigned index, uint64_t *layout,\n                                   const char **name);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_CHANNEL_LAYOUT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/common.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * common internal and external API header\n */\n\n#ifndef AVUTIL_COMMON_H\n#define AVUTIL_COMMON_H\n\n#if defined(__cplusplus) && !defined(__STDC_CONSTANT_MACROS) && !defined(UINT64_C)\n#error missing -D__STDC_CONSTANT_MACROS / #define __STDC_CONSTANT_MACROS\n#endif\n\n#include <errno.h>\n#include <inttypes.h>\n#include <limits.h>\n#include <math.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"attributes.h\"\n#include \"macros.h\"\n#include \"version.h\"\n#include \"libavutil/avconfig.h\"\n\n#if AV_HAVE_BIGENDIAN\n#   define AV_NE(be, le) (be)\n#else\n#   define AV_NE(be, le) (le)\n#endif\n\n//rounded division & shift\n#define RSHIFT(a,b) ((a) > 0 ? ((a) + ((1<<(b))>>1))>>(b) : ((a) + ((1<<(b))>>1)-1)>>(b))\n/* assume b>0 */\n#define ROUNDED_DIV(a,b) (((a)>0 ? (a) + ((b)>>1) : (a) - ((b)>>1))/(b))\n/* Fast a/(1<<b) rounded toward +inf. Assume a>=0 and b>=0 */\n#define AV_CEIL_RSHIFT(a,b) (!av_builtin_constant_p(b) ? -((-(a)) >> (b)) \\\n                                                       : ((a) + (1<<(b)) - 1) >> (b))\n/* Backwards compat. 
*/\n#define FF_CEIL_RSHIFT AV_CEIL_RSHIFT\n\n#define FFUDIV(a,b) (((a)>0 ?(a):(a)-(b)+1) / (b))\n#define FFUMOD(a,b) ((a)-(b)*FFUDIV(a,b))\n\n/**\n * Absolute value, Note, INT_MIN / INT64_MIN result in undefined behavior as they\n * are not representable as absolute values of their type. This is the same\n * as with *abs()\n * @see FFNABS()\n */\n#define FFABS(a) ((a) >= 0 ? (a) : (-(a)))\n#define FFSIGN(a) ((a) > 0 ? 1 : -1)\n\n/**\n * Negative Absolute value.\n * this works for all integers of all types.\n * As with many macros, this evaluates its argument twice, it thus must not have\n * a sideeffect, that is FFNABS(x++) has undefined behavior.\n */\n#define FFNABS(a) ((a) <= 0 ? (a) : (-(a)))\n\n/**\n * Comparator.\n * For two numerical expressions x and y, gives 1 if x > y, -1 if x < y, and 0\n * if x == y. This is useful for instance in a qsort comparator callback.\n * Furthermore, compilers are able to optimize this to branchless code, and\n * there is no risk of overflow with signed types.\n * As with many macros, this evaluates its argument multiple times, it thus\n * must not have a side-effect.\n */\n#define FFDIFFSIGN(x,y) (((x)>(y)) - ((x)<(y)))\n\n#define FFMAX(a,b) ((a) > (b) ? (a) : (b))\n#define FFMAX3(a,b,c) FFMAX(FFMAX(a,b),c)\n#define FFMIN(a,b) ((a) > (b) ? (b) : (a))\n#define FFMIN3(a,b,c) FFMIN(FFMIN(a,b),c)\n\n#define FFSWAP(type,a,b) do{type SWAP_tmp= b; b= a; a= SWAP_tmp;}while(0)\n#define FF_ARRAY_ELEMS(a) (sizeof(a) / sizeof((a)[0]))\n\n/* misc math functions */\n\n#ifdef HAVE_AV_CONFIG_H\n#   include \"config.h\"\n#   include \"intmath.h\"\n#endif\n\n/* Pull in unguarded fallback defines at the end of this file. 
*/\n#include \"common.h\"\n\n#ifndef av_log2\nav_const int av_log2(unsigned v);\n#endif\n\n#ifndef av_log2_16bit\nav_const int av_log2_16bit(unsigned v);\n#endif\n\n/**\n * Clip a signed integer value into the amin-amax range.\n * @param a value to clip\n * @param amin minimum value of the clip range\n * @param amax maximum value of the clip range\n * @return clipped value\n */\nstatic av_always_inline av_const int av_clip_c(int a, int amin, int amax)\n{\n#if defined(HAVE_AV_CONFIG_H) && defined(ASSERT_LEVEL) && ASSERT_LEVEL >= 2\n    if (amin > amax) abort();\n#endif\n    if      (a < amin) return amin;\n    else if (a > amax) return amax;\n    else               return a;\n}\n\n/**\n * Clip a signed 64bit integer value into the amin-amax range.\n * @param a value to clip\n * @param amin minimum value of the clip range\n * @param amax maximum value of the clip range\n * @return clipped value\n */\nstatic av_always_inline av_const int64_t av_clip64_c(int64_t a, int64_t amin, int64_t amax)\n{\n#if defined(HAVE_AV_CONFIG_H) && defined(ASSERT_LEVEL) && ASSERT_LEVEL >= 2\n    if (amin > amax) abort();\n#endif\n    if      (a < amin) return amin;\n    else if (a > amax) return amax;\n    else               return a;\n}\n\n/**\n * Clip a signed integer value into the 0-255 range.\n * @param a value to clip\n * @return clipped value\n */\nstatic av_always_inline av_const uint8_t av_clip_uint8_c(int a)\n{\n    if (a&(~0xFF)) return (~a)>>31;\n    else           return a;\n}\n\n/**\n * Clip a signed integer value into the -128,127 range.\n * @param a value to clip\n * @return clipped value\n */\nstatic av_always_inline av_const int8_t av_clip_int8_c(int a)\n{\n    if ((a+0x80U) & ~0xFF) return (a>>31) ^ 0x7F;\n    else                  return a;\n}\n\n/**\n * Clip a signed integer value into the 0-65535 range.\n * @param a value to clip\n * @return clipped value\n */\nstatic av_always_inline av_const uint16_t av_clip_uint16_c(int a)\n{\n    if (a&(~0xFFFF)) return 
(~a)>>31;\n    else             return a;\n}\n\n/**\n * Clip a signed integer value into the -32768,32767 range.\n * @param a value to clip\n * @return clipped value\n */\nstatic av_always_inline av_const int16_t av_clip_int16_c(int a)\n{\n    if ((a+0x8000U) & ~0xFFFF) return (a>>31) ^ 0x7FFF;\n    else                      return a;\n}\n\n/**\n * Clip a signed 64-bit integer value into the -2147483648,2147483647 range.\n * @param a value to clip\n * @return clipped value\n */\nstatic av_always_inline av_const int32_t av_clipl_int32_c(int64_t a)\n{\n    if ((a+0x80000000u) & ~UINT64_C(0xFFFFFFFF)) return (int32_t)((a>>63) ^ 0x7FFFFFFF);\n    else                                         return (int32_t)a;\n}\n\n/**\n * Clip a signed integer into the -(2^p),(2^p-1) range.\n * @param  a value to clip\n * @param  p bit position to clip at\n * @return clipped value\n */\nstatic av_always_inline av_const int av_clip_intp2_c(int a, int p)\n{\n    if (((unsigned)a + (1 << p)) & ~((2 << p) - 1))\n        return (a >> 31) ^ ((1 << p) - 1);\n    else\n        return a;\n}\n\n/**\n * Clip a signed integer to an unsigned power of two range.\n * @param  a value to clip\n * @param  p bit position to clip at\n * @return clipped value\n */\nstatic av_always_inline av_const unsigned av_clip_uintp2_c(int a, int p)\n{\n    if (a & ~((1<<p) - 1)) return (~a) >> 31 & ((1<<p) - 1);\n    else                   return  a;\n}\n\n/**\n * Clear high bits from an unsigned integer starting with specific bit position\n * @param  a value to clip\n * @param  p bit position to clip at\n * @return clipped value\n */\nstatic av_always_inline av_const unsigned av_mod_uintp2_c(unsigned a, unsigned p)\n{\n    return a & ((1 << p) - 1);\n}\n\n/**\n * Add two signed 32-bit values with saturation.\n *\n * @param  a one value\n * @param  b another value\n * @return sum with signed saturation\n */\nstatic av_always_inline int av_sat_add32_c(int a, int b)\n{\n    return av_clipl_int32((int64_t)a + 
b);\n}\n\n/**\n * Add a doubled value to another value with saturation at both stages.\n *\n * @param  a first value\n * @param  b value doubled and added to a\n * @return sum sat(a + sat(2*b)) with signed saturation\n */\nstatic av_always_inline int av_sat_dadd32_c(int a, int b)\n{\n    return av_sat_add32(a, av_sat_add32(b, b));\n}\n\n/**\n * Subtract two signed 32-bit values with saturation.\n *\n * @param  a one value\n * @param  b another value\n * @return difference with signed saturation\n */\nstatic av_always_inline int av_sat_sub32_c(int a, int b)\n{\n    return av_clipl_int32((int64_t)a - b);\n}\n\n/**\n * Subtract a doubled value from another value with saturation at both stages.\n *\n * @param  a first value\n * @param  b value doubled and subtracted from a\n * @return difference sat(a - sat(2*b)) with signed saturation\n */\nstatic av_always_inline int av_sat_dsub32_c(int a, int b)\n{\n    return av_sat_sub32(a, av_sat_add32(b, b));\n}\n\n/**\n * Clip a float value into the amin-amax range.\n * @param a value to clip\n * @param amin minimum value of the clip range\n * @param amax maximum value of the clip range\n * @return clipped value\n */\nstatic av_always_inline av_const float av_clipf_c(float a, float amin, float amax)\n{\n#if defined(HAVE_AV_CONFIG_H) && defined(ASSERT_LEVEL) && ASSERT_LEVEL >= 2\n    if (amin > amax) abort();\n#endif\n    if      (a < amin) return amin;\n    else if (a > amax) return amax;\n    else               return a;\n}\n\n/**\n * Clip a double value into the amin-amax range.\n * @param a value to clip\n * @param amin minimum value of the clip range\n * @param amax maximum value of the clip range\n * @return clipped value\n */\nstatic av_always_inline av_const double av_clipd_c(double a, double amin, double amax)\n{\n#if defined(HAVE_AV_CONFIG_H) && defined(ASSERT_LEVEL) && ASSERT_LEVEL >= 2\n    if (amin > amax) abort();\n#endif\n    if      (a < amin) return amin;\n    else if (a > amax) return amax;\n    else            
   return a;\n}\n\n/** Compute ceil(log2(x)).\n * @param x value used to compute ceil(log2(x))\n * @return computed ceiling of log2(x)\n */\nstatic av_always_inline av_const int av_ceil_log2_c(int x)\n{\n    return av_log2((x - 1) << 1);\n}\n\n/**\n * Count number of bits set to one in x\n * @param x value to count bits of\n * @return the number of bits set to one in x\n */\nstatic av_always_inline av_const int av_popcount_c(uint32_t x)\n{\n    x -= (x >> 1) & 0x55555555;\n    x = (x & 0x33333333) + ((x >> 2) & 0x33333333);\n    x = (x + (x >> 4)) & 0x0F0F0F0F;\n    x += x >> 8;\n    return (x + (x >> 16)) & 0x3F;\n}\n\n/**\n * Count number of bits set to one in x\n * @param x value to count bits of\n * @return the number of bits set to one in x\n */\nstatic av_always_inline av_const int av_popcount64_c(uint64_t x)\n{\n    return av_popcount((uint32_t)x) + av_popcount((uint32_t)(x >> 32));\n}\n\nstatic av_always_inline av_const int av_parity_c(uint32_t v)\n{\n    return av_popcount(v) & 1;\n}\n\n#define MKTAG(a,b,c,d) ((a) | ((b) << 8) | ((c) << 16) | ((unsigned)(d) << 24))\n#define MKBETAG(a,b,c,d) ((d) | ((c) << 8) | ((b) << 16) | ((unsigned)(a) << 24))\n\n/**\n * Convert a UTF-8 character (up to 4 bytes) to its 32-bit UCS-4 encoded form.\n *\n * @param val      Output value, must be an lvalue of type uint32_t.\n * @param GET_BYTE Expression reading one byte from the input.\n *                 Evaluated up to 7 times (4 for the currently\n *                 assigned Unicode range).  With a memory buffer\n *                 input, this could be *ptr++.\n * @param ERROR    Expression to be evaluated on invalid input,\n *                 typically a goto statement.\n *\n * @warning ERROR should not contain a loop control statement which\n * could interact with the internal while loop, and should force an\n * exit from the macro code (e.g. 
through a goto or a return) in order\n * to prevent undefined results.\n */\n#define GET_UTF8(val, GET_BYTE, ERROR)\\\n    val= (GET_BYTE);\\\n    {\\\n        uint32_t top = (val & 128) >> 1;\\\n        if ((val & 0xc0) == 0x80 || val >= 0xFE)\\\n            ERROR\\\n        while (val & top) {\\\n            int tmp= (GET_BYTE) - 128;\\\n            if(tmp>>6)\\\n                ERROR\\\n            val= (val<<6) + tmp;\\\n            top <<= 5;\\\n        }\\\n        val &= (top << 1) - 1;\\\n    }\n\n/**\n * Convert a UTF-16 character (2 or 4 bytes) to its 32-bit UCS-4 encoded form.\n *\n * @param val       Output value, must be an lvalue of type uint32_t.\n * @param GET_16BIT Expression returning two bytes of UTF-16 data converted\n *                  to native byte order.  Evaluated one or two times.\n * @param ERROR     Expression to be evaluated on invalid input,\n *                  typically a goto statement.\n */\n#define GET_UTF16(val, GET_16BIT, ERROR)\\\n    val = GET_16BIT;\\\n    {\\\n        unsigned int hi = val - 0xD800;\\\n        if (hi < 0x800) {\\\n            val = GET_16BIT - 0xDC00;\\\n            if (val > 0x3FFU || hi > 0x3FFU)\\\n                ERROR\\\n            val += (hi<<10) + 0x10000;\\\n        }\\\n    }\\\n\n/**\n * @def PUT_UTF8(val, tmp, PUT_BYTE)\n * Convert a 32-bit Unicode character to its UTF-8 encoded form (up to 4 bytes long).\n * @param val is an input-only argument and should be of type uint32_t. It holds\n * a UCS-4 encoded Unicode character that is to be converted to UTF-8. If\n * val is given as a function it is executed only once.\n * @param tmp is a temporary variable and should be of type uint8_t. 
It\n * represents an intermediate value during conversion that is to be\n * output by PUT_BYTE.\n * @param PUT_BYTE writes the converted UTF-8 bytes to any proper destination.\n * It could be a function or a statement, and uses tmp as the input byte.\n * For example, PUT_BYTE could be \"*output++ = tmp;\" PUT_BYTE will be\n * executed up to 4 times for values in the valid UTF-8 range and up to\n * 7 times in the general case, depending on the length of the converted\n * Unicode character.\n */\n#define PUT_UTF8(val, tmp, PUT_BYTE)\\\n    {\\\n        int bytes, shift;\\\n        uint32_t in = val;\\\n        if (in < 0x80) {\\\n            tmp = in;\\\n            PUT_BYTE\\\n        } else {\\\n            bytes = (av_log2(in) + 4) / 5;\\\n            shift = (bytes - 1) * 6;\\\n            tmp = (256 - (256 >> bytes)) | (in >> shift);\\\n            PUT_BYTE\\\n            while (shift >= 6) {\\\n                shift -= 6;\\\n                tmp = 0x80 | ((in >> shift) & 0x3f);\\\n                PUT_BYTE\\\n            }\\\n        }\\\n    }\n\n/**\n * @def PUT_UTF16(val, tmp, PUT_16BIT)\n * Convert a 32-bit Unicode character to its UTF-16 encoded form (2 or 4 bytes).\n * @param val is an input-only argument and should be of type uint32_t. It holds\n * a UCS-4 encoded Unicode character that is to be converted to UTF-16. If\n * val is given as a function it is executed only once.\n * @param tmp is a temporary variable and should be of type uint16_t. It\n * represents an intermediate value during conversion that is to be\n * output by PUT_16BIT.\n * @param PUT_16BIT writes the converted UTF-16 data to any proper destination\n * in desired endianness. It could be a function or a statement, and uses tmp\n * as the input byte.  
For example, PUT_16BIT could be \"*output++ = tmp;\"\n * PUT_16BIT will be executed 1 or 2 times depending on input character.\n */\n#define PUT_UTF16(val, tmp, PUT_16BIT)\\\n    {\\\n        uint32_t in = val;\\\n        if (in < 0x10000) {\\\n            tmp = in;\\\n            PUT_16BIT\\\n        } else {\\\n            tmp = 0xD800 | ((in - 0x10000) >> 10);\\\n            PUT_16BIT\\\n            tmp = 0xDC00 | ((in - 0x10000) & 0x3FF);\\\n            PUT_16BIT\\\n        }\\\n    }\\\n\n\n\n#include \"mem.h\"\n\n#ifdef HAVE_AV_CONFIG_H\n#    include \"internal.h\"\n#endif /* HAVE_AV_CONFIG_H */\n\n#endif /* AVUTIL_COMMON_H */\n\n/*\n * The following definitions are outside the multiple inclusion guard\n * to ensure they are immediately available in intmath.h.\n */\n\n#ifndef av_ceil_log2\n#   define av_ceil_log2     av_ceil_log2_c\n#endif\n#ifndef av_clip\n#   define av_clip          av_clip_c\n#endif\n#ifndef av_clip64\n#   define av_clip64        av_clip64_c\n#endif\n#ifndef av_clip_uint8\n#   define av_clip_uint8    av_clip_uint8_c\n#endif\n#ifndef av_clip_int8\n#   define av_clip_int8     av_clip_int8_c\n#endif\n#ifndef av_clip_uint16\n#   define av_clip_uint16   av_clip_uint16_c\n#endif\n#ifndef av_clip_int16\n#   define av_clip_int16    av_clip_int16_c\n#endif\n#ifndef av_clipl_int32\n#   define av_clipl_int32   av_clipl_int32_c\n#endif\n#ifndef av_clip_intp2\n#   define av_clip_intp2    av_clip_intp2_c\n#endif\n#ifndef av_clip_uintp2\n#   define av_clip_uintp2   av_clip_uintp2_c\n#endif\n#ifndef av_mod_uintp2\n#   define av_mod_uintp2    av_mod_uintp2_c\n#endif\n#ifndef av_sat_add32\n#   define av_sat_add32     av_sat_add32_c\n#endif\n#ifndef av_sat_dadd32\n#   define av_sat_dadd32    av_sat_dadd32_c\n#endif\n#ifndef av_sat_sub32\n#   define av_sat_sub32     av_sat_sub32_c\n#endif\n#ifndef av_sat_dsub32\n#   define av_sat_dsub32    av_sat_dsub32_c\n#endif\n#ifndef av_clipf\n#   define av_clipf         av_clipf_c\n#endif\n#ifndef av_clipd\n#   define 
av_clipd         av_clipd_c\n#endif\n#ifndef av_popcount\n#   define av_popcount      av_popcount_c\n#endif\n#ifndef av_popcount64\n#   define av_popcount64    av_popcount64_c\n#endif\n#ifndef av_parity\n#   define av_parity        av_parity_c\n#endif\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/cpu.h",
    "content": "/*\n * Copyright (c) 2000, 2001, 2002 Fabrice Bellard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_CPU_H\n#define AVUTIL_CPU_H\n\n#include <stddef.h>\n\n#include \"attributes.h\"\n\n#define AV_CPU_FLAG_FORCE    0x80000000 /* force usage of selected flags (OR) */\n\n    /* lower 16 bits - CPU features */\n#define AV_CPU_FLAG_MMX          0x0001 ///< standard MMX\n#define AV_CPU_FLAG_MMXEXT       0x0002 ///< SSE integer functions or AMD MMX ext\n#define AV_CPU_FLAG_MMX2         0x0002 ///< SSE integer functions or AMD MMX ext\n#define AV_CPU_FLAG_3DNOW        0x0004 ///< AMD 3DNOW\n#define AV_CPU_FLAG_SSE          0x0008 ///< SSE functions\n#define AV_CPU_FLAG_SSE2         0x0010 ///< PIV SSE2 functions\n#define AV_CPU_FLAG_SSE2SLOW 0x40000000 ///< SSE2 supported, but usually not faster\n                                        ///< than regular MMX/SSE (e.g. Core1)\n#define AV_CPU_FLAG_3DNOWEXT     0x0020 ///< AMD 3DNowExt\n#define AV_CPU_FLAG_SSE3         0x0040 ///< Prescott SSE3 functions\n#define AV_CPU_FLAG_SSE3SLOW 0x20000000 ///< SSE3 supported, but usually not faster\n                                        ///< than regular MMX/SSE (e.g. 
Core1)\n#define AV_CPU_FLAG_SSSE3        0x0080 ///< Conroe SSSE3 functions\n#define AV_CPU_FLAG_SSSE3SLOW 0x4000000 ///< SSSE3 supported, but usually not faster\n#define AV_CPU_FLAG_ATOM     0x10000000 ///< Atom processor, some SSSE3 instructions are slower\n#define AV_CPU_FLAG_SSE4         0x0100 ///< Penryn SSE4.1 functions\n#define AV_CPU_FLAG_SSE42        0x0200 ///< Nehalem SSE4.2 functions\n#define AV_CPU_FLAG_AESNI       0x80000 ///< Advanced Encryption Standard functions\n#define AV_CPU_FLAG_AVX          0x4000 ///< AVX functions: requires OS support even if YMM registers aren't used\n#define AV_CPU_FLAG_AVXSLOW   0x8000000 ///< AVX supported, but slow when using YMM registers (e.g. Bulldozer)\n#define AV_CPU_FLAG_XOP          0x0400 ///< Bulldozer XOP functions\n#define AV_CPU_FLAG_FMA4         0x0800 ///< Bulldozer FMA4 functions\n#define AV_CPU_FLAG_CMOV         0x1000 ///< supports cmov instruction\n#define AV_CPU_FLAG_AVX2         0x8000 ///< AVX2 functions: requires OS support even if YMM registers aren't used\n#define AV_CPU_FLAG_FMA3        0x10000 ///< Haswell FMA3 functions\n#define AV_CPU_FLAG_BMI1        0x20000 ///< Bit Manipulation Instruction Set 1\n#define AV_CPU_FLAG_BMI2        0x40000 ///< Bit Manipulation Instruction Set 2\n#define AV_CPU_FLAG_AVX512     0x100000 ///< AVX-512 functions: requires OS support even if YMM/ZMM registers aren't used\n\n#define AV_CPU_FLAG_ALTIVEC      0x0001 ///< standard\n#define AV_CPU_FLAG_VSX          0x0002 ///< ISA 2.06\n#define AV_CPU_FLAG_POWER8       0x0004 ///< ISA 2.07\n\n#define AV_CPU_FLAG_ARMV5TE      (1 << 0)\n#define AV_CPU_FLAG_ARMV6        (1 << 1)\n#define AV_CPU_FLAG_ARMV6T2      (1 << 2)\n#define AV_CPU_FLAG_VFP          (1 << 3)\n#define AV_CPU_FLAG_VFPV3        (1 << 4)\n#define AV_CPU_FLAG_NEON         (1 << 5)\n#define AV_CPU_FLAG_ARMV8        (1 << 6)\n#define AV_CPU_FLAG_VFP_VM       (1 << 7) ///< VFPv2 vector mode, deprecated in ARMv7-A and unavailable in various CPUs 
implementations\n#define AV_CPU_FLAG_SETEND       (1 <<16)\n\n/**\n * Return the flags which specify extensions supported by the CPU.\n * The returned value is affected by av_force_cpu_flags() if that was used\n * before. So av_get_cpu_flags() can easily be used in an application to\n * detect the enabled cpu flags.\n */\nint av_get_cpu_flags(void);\n\n/**\n * Disables cpu detection and forces the specified flags.\n * -1 is a special case that disables forcing of specific flags.\n */\nvoid av_force_cpu_flags(int flags);\n\n/**\n * Set a mask on flags returned by av_get_cpu_flags().\n * This function is mainly useful for testing.\n * Please use av_force_cpu_flags() and av_get_cpu_flags() instead which are more flexible\n */\nattribute_deprecated void av_set_cpu_flags_mask(int mask);\n\n/**\n * Parse CPU flags from a string.\n *\n * The returned flags contain the specified flags as well as related unspecified flags.\n *\n * This function exists only for compatibility with libav.\n * Please use av_parse_cpu_caps() when possible.\n * @return a combination of AV_CPU_* flags, negative on error.\n */\nattribute_deprecated\nint av_parse_cpu_flags(const char *s);\n\n/**\n * Parse CPU caps from a string and update the given AV_CPU_* flags based on that.\n *\n * @return negative on error.\n */\nint av_parse_cpu_caps(unsigned *flags, const char *s);\n\n/**\n * @return the number of logical CPU cores present.\n */\nint av_cpu_count(void);\n\n/**\n * Get the maximum data alignment that may be required by FFmpeg.\n *\n * Note that this is affected by the build configuration and the CPU flags mask,\n * so e.g. if the CPU supports AVX, but libavutil has been built with\n * --disable-avx or the AV_CPU_FLAG_AVX flag has been disabled through\n *  av_set_cpu_flags_mask(), then this function will behave as if AVX is not\n *  present.\n */\nsize_t av_cpu_max_align(void);\n\n#endif /* AVUTIL_CPU_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/crc.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_crc32\n * Public header for CRC hash function implementation.\n */\n\n#ifndef AVUTIL_CRC_H\n#define AVUTIL_CRC_H\n\n#include <stdint.h>\n#include <stddef.h>\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @defgroup lavu_crc32 CRC\n * @ingroup lavu_hash\n * CRC (Cyclic Redundancy Check) hash function implementation.\n *\n * This module supports numerous CRC polynomials, in addition to the most\n * widely used CRC-32-IEEE. See @ref AVCRCId for a list of available\n * polynomials.\n *\n * @{\n */\n\ntypedef uint32_t AVCRC;\n\ntypedef enum {\n    AV_CRC_8_ATM,\n    AV_CRC_16_ANSI,\n    AV_CRC_16_CCITT,\n    AV_CRC_32_IEEE,\n    AV_CRC_32_IEEE_LE,  /*< reversed bitorder version of AV_CRC_32_IEEE */\n    AV_CRC_16_ANSI_LE,  /*< reversed bitorder version of AV_CRC_16_ANSI */\n    AV_CRC_24_IEEE,\n    AV_CRC_8_EBU,\n    AV_CRC_MAX,         /*< Not part of public API! Do not use outside libavutil. 
*/\n}AVCRCId;\n\n/**\n * Initialize a CRC table.\n * @param ctx must be an array of size sizeof(AVCRC)*257 or sizeof(AVCRC)*1024\n * @param le If 1, the lowest bit represents the coefficient for the highest\n *           exponent of the corresponding polynomial (both for poly and\n *           actual CRC).\n *           If 0, you must swap the CRC parameter and the result of av_crc\n *           if you need the standard representation (can be simplified in\n *           most cases to e.g. bswap16):\n *           av_bswap32(crc << (32-bits))\n * @param bits number of bits for the CRC\n * @param poly generator polynomial without the x**bits coefficient, in the\n *             representation as specified by le\n * @param ctx_size size of ctx in bytes\n * @return <0 on failure\n */\nint av_crc_init(AVCRC *ctx, int le, int bits, uint32_t poly, int ctx_size);\n\n/**\n * Get an initialized standard CRC table.\n * @param crc_id ID of a standard CRC\n * @return a pointer to the CRC table or NULL on failure\n */\nconst AVCRC *av_crc_get_table(AVCRCId crc_id);\n\n/**\n * Calculate the CRC of a block.\n * @param crc CRC of previous blocks if any or initial value for CRC\n * @return CRC updated with the data from the given block\n *\n * @see av_crc_init() \"le\" parameter\n */\nuint32_t av_crc(const AVCRC *ctx, uint32_t crc,\n                const uint8_t *buffer, size_t length) av_pure;\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_CRC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/des.h",
    "content": "/*\n * DES encryption/decryption\n * Copyright (c) 2007 Reimar Doeffinger\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_DES_H\n#define AVUTIL_DES_H\n\n#include <stdint.h>\n\n/**\n * @defgroup lavu_des DES\n * @ingroup lavu_crypto\n * @{\n */\n\ntypedef struct AVDES {\n    uint64_t round_keys[3][16];\n    int triple_des;\n} AVDES;\n\n/**\n * Allocate an AVDES context.\n */\nAVDES *av_des_alloc(void);\n\n/**\n * @brief Initializes an AVDES context.\n *\n * @param key_bits must be 64 or 192\n * @param decrypt 0 for encryption/CBC-MAC, 1 for decryption\n * @return zero on success, negative value otherwise\n */\nint av_des_init(struct AVDES *d, const uint8_t *key, int key_bits, int decrypt);\n\n/**\n * @brief Encrypts / decrypts using the DES algorithm.\n *\n * @param count number of 8 byte blocks\n * @param dst destination array, can be equal to src, must be 8-byte aligned\n * @param src source array, can be equal to dst, must be 8-byte aligned, may be NULL\n * @param iv initialization vector for CBC mode, if NULL then ECB will be used,\n *           must be 8-byte aligned\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_des_crypt(struct AVDES *d, uint8_t *dst, const uint8_t *src, int count, 
uint8_t *iv, int decrypt);\n\n/**\n * @brief Calculates CBC-MAC using the DES algorithm.\n *\n * @param count number of 8 byte blocks\n * @param dst destination array, can be equal to src, must be 8-byte aligned\n * @param src source array, can be equal to dst, must be 8-byte aligned, may be NULL\n */\nvoid av_des_mac(struct AVDES *d, uint8_t *dst, const uint8_t *src, int count);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_DES_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/dict.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Public dictionary API.\n * @deprecated\n *  AVDictionary is provided for compatibility with libav. It is both in\n *  implementation as well as API inefficient. It does not scale and is\n *  extremely slow with large dictionaries.\n *  It is recommended that new code uses our tree container from tree.c/h\n *  where applicable, which uses AVL trees to achieve O(log n) performance.\n */\n\n#ifndef AVUTIL_DICT_H\n#define AVUTIL_DICT_H\n\n#include <stdint.h>\n\n#include \"version.h\"\n\n/**\n * @addtogroup lavu_dict AVDictionary\n * @ingroup lavu_data\n *\n * @brief Simple key:value store\n *\n * @{\n * Dictionaries are used for storing key:value pairs. To create\n * an AVDictionary, simply pass an address of a NULL pointer to\n * av_dict_set(). 
NULL can be used as an empty dictionary wherever\n * a pointer to an AVDictionary is required.\n * Use av_dict_get() to retrieve an entry or iterate over all\n * entries and finally av_dict_free() to free the dictionary\n * and all its contents.\n *\n @code\n   AVDictionary *d = NULL;           // \"create\" an empty dictionary\n   AVDictionaryEntry *t = NULL;\n\n   av_dict_set(&d, \"foo\", \"bar\", 0); // add an entry\n\n   char *k = av_strdup(\"key\");       // if your strings are already allocated,\n   char *v = av_strdup(\"value\");     // you can avoid copying them like this\n   av_dict_set(&d, k, v, AV_DICT_DONT_STRDUP_KEY | AV_DICT_DONT_STRDUP_VAL);\n\n   while (t = av_dict_get(d, \"\", t, AV_DICT_IGNORE_SUFFIX)) {\n       <....>                             // iterate over all entries in d\n   }\n   av_dict_free(&d);\n @endcode\n */\n\n#define AV_DICT_MATCH_CASE      1   /**< Only get an entry with exact-case key match. Only relevant in av_dict_get(). */\n#define AV_DICT_IGNORE_SUFFIX   2   /**< Return first entry in a dictionary whose first part corresponds to the search key,\n                                         ignoring the suffix of the found key string. Only relevant in av_dict_get(). */\n#define AV_DICT_DONT_STRDUP_KEY 4   /**< Take ownership of a key that's been\n                                         allocated with av_malloc() or another memory allocation function. */\n#define AV_DICT_DONT_STRDUP_VAL 8   /**< Take ownership of a value that's been\n                                         allocated with av_malloc() or another memory allocation function. */\n#define AV_DICT_DONT_OVERWRITE 16   ///< Don't overwrite existing entries.\n#define AV_DICT_APPEND         32   /**< If the entry already exists, append to it.  Note that no\n                                      delimiter is added, the strings are simply concatenated. 
*/\n#define AV_DICT_MULTIKEY       64   /**< Allow to store several equal keys in the dictionary */\n\ntypedef struct AVDictionaryEntry {\n    char *key;\n    char *value;\n} AVDictionaryEntry;\n\ntypedef struct AVDictionary AVDictionary;\n\n/**\n * Get a dictionary entry with matching key.\n *\n * The returned entry key or value must not be changed, or it will\n * cause undefined behavior.\n *\n * To iterate through all the dictionary entries, you can set the matching key\n * to the null string \"\" and set the AV_DICT_IGNORE_SUFFIX flag.\n *\n * @param prev Set to the previous matching element to find the next.\n *             If set to NULL the first matching element is returned.\n * @param key matching key\n * @param flags a collection of AV_DICT_* flags controlling how the entry is retrieved\n * @return found entry or NULL in case no matching entry was found in the dictionary\n */\nAVDictionaryEntry *av_dict_get(const AVDictionary *m, const char *key,\n                               const AVDictionaryEntry *prev, int flags);\n\n/**\n * Get number of entries in dictionary.\n *\n * @param m dictionary\n * @return  number of entries in dictionary\n */\nint av_dict_count(const AVDictionary *m);\n\n/**\n * Set the given entry in *pm, overwriting an existing entry.\n *\n * Note: If AV_DICT_DONT_STRDUP_KEY or AV_DICT_DONT_STRDUP_VAL is set,\n * these arguments will be freed on error.\n *\n * Warning: Adding a new entry to a dictionary invalidates all existing entries\n * previously returned with av_dict_get.\n *\n * @param pm pointer to a pointer to a dictionary struct. 
If *pm is NULL\n * a dictionary struct is allocated and put in *pm.\n * @param key entry key to add to *pm (will either be av_strduped or added as a new key depending on flags)\n * @param value entry value to add to *pm (will be av_strduped or added as a new key depending on flags).\n *        Passing a NULL value will cause an existing entry to be deleted.\n * @return >= 0 on success otherwise an error code <0\n */\nint av_dict_set(AVDictionary **pm, const char *key, const char *value, int flags);\n\n/**\n * Convenience wrapper for av_dict_set that converts the value to a string\n * and stores it.\n *\n * Note: If AV_DICT_DONT_STRDUP_KEY is set, key will be freed on error.\n */\nint av_dict_set_int(AVDictionary **pm, const char *key, int64_t value, int flags);\n\n/**\n * Parse the key/value pairs list and add the parsed entries to a dictionary.\n *\n * In case of failure, all the successfully set entries are stored in\n * *pm. You may need to manually free the created dictionary.\n *\n * @param key_val_sep  a 0-terminated list of characters used to separate\n *                     key from value\n * @param pairs_sep    a 0-terminated list of characters used to separate\n *                     two pairs from each other\n * @param flags        flags to use when adding to dictionary.\n *                     AV_DICT_DONT_STRDUP_KEY and AV_DICT_DONT_STRDUP_VAL\n *                     are ignored since the key/value tokens will always\n *                     be duplicated.\n * @return             0 on success, negative AVERROR code on failure\n */\nint av_dict_parse_string(AVDictionary **pm, const char *str,\n                         const char *key_val_sep, const char *pairs_sep,\n                         int flags);\n\n/**\n * Copy entries from one AVDictionary struct into another.\n * @param dst pointer to a pointer to a AVDictionary struct. 
If *dst is NULL,\n *            this function will allocate a struct for you and put it in *dst\n * @param src pointer to source AVDictionary struct\n * @param flags flags to use when setting entries in *dst\n * @note metadata is read using the AV_DICT_IGNORE_SUFFIX flag\n * @return 0 on success, negative AVERROR code on failure. If dst was allocated\n *           by this function, callers should free the associated memory.\n */\nint av_dict_copy(AVDictionary **dst, const AVDictionary *src, int flags);\n\n/**\n * Free all the memory allocated for an AVDictionary struct\n * and all keys and values.\n */\nvoid av_dict_free(AVDictionary **m);\n\n/**\n * Get dictionary entries as a string.\n *\n * Create a string containing the dictionary's entries.\n * Such a string may be passed back to av_dict_parse_string().\n * @note String is escaped with backslashes ('\\').\n *\n * @param[in]  m             dictionary\n * @param[out] buffer        Pointer to buffer that will be allocated with string containing entries.\n *                           Buffer must be freed by the caller when it is no longer needed.\n * @param[in]  key_val_sep   character used to separate key from value\n * @param[in]  pairs_sep     character used to separate two pairs from each other\n * @return                   >= 0 on success, negative on error\n * @warning Separators cannot be '\\\\' or '\\0'. They also cannot be the same.\n */\nint av_dict_get_string(const AVDictionary *m, char **buffer,\n                       const char key_val_sep, const char pairs_sep);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_DICT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/display.h",
    "content": "/*\n * Copyright (c) 2014 Vittorio Giovara <vittorio.giovara@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Display matrix\n */\n\n#ifndef AVUTIL_DISPLAY_H\n#define AVUTIL_DISPLAY_H\n\n#include <stdint.h>\n#include \"common.h\"\n\n/**\n * @addtogroup lavu_video\n * @{\n *\n * @defgroup lavu_video_display Display transformation matrix functions\n * @{\n */\n\n/**\n * @addtogroup lavu_video_display\n * The display transformation matrix specifies an affine transformation that\n * should be applied to video frames for correct presentation. 
It is compatible\n * with the matrices stored in the ISO/IEC 14496-12 container format.\n *\n * The data is a 3x3 matrix represented as a 9-element array:\n *\n * @code{.unparsed}\n *                                  | a b u |\n *   (a, b, u, c, d, v, x, y, w) -> | c d v |\n *                                  | x y w |\n * @endcode\n *\n * All numbers are stored in native endianness, as 16.16 fixed-point values,\n * except for u, v and w, which are stored as 2.30 fixed-point values.\n *\n * The transformation maps a point (p, q) in the source (pre-transformation)\n * frame to the point (p', q') in the destination (post-transformation) frame as\n * follows:\n *\n * @code{.unparsed}\n *               | a b u |\n *   (p, q, 1) . | c d v | = z * (p', q', 1)\n *               | x y w |\n * @endcode\n *\n * The transformation can also be more explicitly written in components as\n * follows:\n *\n * @code{.unparsed}\n *   p' = (a * p + c * q + x) / z;\n *   q' = (b * p + d * q + y) / z;\n *   z  =  u * p + v * q + w\n * @endcode\n */\n\n/**\n * Extract the rotation component of the transformation matrix.\n *\n * @param matrix the transformation matrix\n * @return the angle (in degrees) by which the transformation rotates the frame\n *         counterclockwise. 
The angle will be in range [-180.0, 180.0],\n *         or NaN if the matrix is singular.\n *\n * @note floating point numbers are inherently inexact, so callers are\n *       recommended to round the return value to nearest integer before use.\n */\ndouble av_display_rotation_get(const int32_t matrix[9]);\n\n/**\n * Initialize a transformation matrix describing a pure counterclockwise\n * rotation by the specified angle (in degrees).\n *\n * @param matrix an allocated transformation matrix (will be fully overwritten\n *               by this function)\n * @param angle rotation angle in degrees.\n */\nvoid av_display_rotation_set(int32_t matrix[9], double angle);\n\n/**\n * Flip the input matrix horizontally and/or vertically.\n *\n * @param matrix an allocated transformation matrix\n * @param hflip whether the matrix should be flipped horizontally\n * @param vflip whether the matrix should be flipped vertically\n */\nvoid av_display_matrix_flip(int32_t matrix[9], int hflip, int vflip);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_DISPLAY_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/downmix_info.h",
    "content": "/*\n * Copyright (c) 2014 Tim Walker <tdskywalker@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_DOWNMIX_INFO_H\n#define AVUTIL_DOWNMIX_INFO_H\n\n#include \"frame.h\"\n\n/**\n * @file\n * audio downmix metadata\n */\n\n/**\n * @addtogroup lavu_audio\n * @{\n */\n\n/**\n * @defgroup downmix_info Audio downmix metadata\n * @{\n */\n\n/**\n * Possible downmix types.\n */\nenum AVDownmixType {\n    AV_DOWNMIX_TYPE_UNKNOWN, /**< Not indicated. */\n    AV_DOWNMIX_TYPE_LORO,    /**< Lo/Ro 2-channel downmix (Stereo). */\n    AV_DOWNMIX_TYPE_LTRT,    /**< Lt/Rt 2-channel downmix, Dolby Surround compatible. */\n    AV_DOWNMIX_TYPE_DPLII,   /**< Lt/Rt 2-channel downmix, Dolby Pro Logic II compatible. */\n    AV_DOWNMIX_TYPE_NB       /**< Number of downmix types. Not part of ABI. 
*/\n};\n\n/**\n * This structure describes optional metadata relevant to a downmix procedure.\n *\n * All fields are set by the decoder to the value indicated in the audio\n * bitstream (if present), or to a \"sane\" default otherwise.\n */\ntypedef struct AVDownmixInfo {\n    /**\n     * Type of downmix preferred by the mastering engineer.\n     */\n    enum AVDownmixType preferred_downmix_type;\n\n    /**\n     * Absolute scale factor representing the nominal level of the center\n     * channel during a regular downmix.\n     */\n    double center_mix_level;\n\n    /**\n     * Absolute scale factor representing the nominal level of the center\n     * channel during an Lt/Rt compatible downmix.\n     */\n    double center_mix_level_ltrt;\n\n    /**\n     * Absolute scale factor representing the nominal level of the surround\n     * channels during a regular downmix.\n     */\n    double surround_mix_level;\n\n    /**\n     * Absolute scale factor representing the nominal level of the surround\n     * channels during an Lt/Rt compatible downmix.\n     */\n    double surround_mix_level_ltrt;\n\n    /**\n     * Absolute scale factor representing the level at which the LFE data is\n     * mixed into L/R channels during downmixing.\n     */\n    double lfe_mix_level;\n} AVDownmixInfo;\n\n/**\n * Get a frame's AV_FRAME_DATA_DOWNMIX_INFO side data for editing.\n *\n * If the side data is absent, it is created and added to the frame.\n *\n * @param frame the frame for which the side data is to be obtained or created\n *\n * @return the AVDownmixInfo structure to be edited by the caller, or NULL if\n *         the structure cannot be allocated.\n */\nAVDownmixInfo *av_downmix_info_update_side_data(AVFrame *frame);\n\n/**\n * @}\n */\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_DOWNMIX_INFO_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/encryption_info.h",
    "content": "/**\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_ENCRYPTION_INFO_H\n#define AVUTIL_ENCRYPTION_INFO_H\n\n#include <stddef.h>\n#include <stdint.h>\n\ntypedef struct AVSubsampleEncryptionInfo {\n    /** The number of bytes that are clear. */\n    unsigned int bytes_of_clear_data;\n\n    /**\n     * The number of bytes that are protected.  If using pattern encryption,\n     * the pattern applies to only the protected bytes; if not using pattern\n     * encryption, all these bytes are encrypted.\n     */\n    unsigned int bytes_of_protected_data;\n} AVSubsampleEncryptionInfo;\n\n/**\n * This describes encryption info for a packet.  This contains frame-specific\n * info for how to decrypt the packet before passing it to the decoder.\n *\n * The size of this struct is not part of the public ABI.\n */\ntypedef struct AVEncryptionInfo {\n    /** The fourcc encryption scheme. */\n    uint32_t scheme;\n\n    /**\n     * Only used for pattern encryption.  This is the number of 16-byte blocks\n     * that are encrypted.\n     */\n    uint32_t crypt_byte_block;\n\n    /**\n     * Only used for pattern encryption.  
This is the number of 16-byte blocks\n     * that are clear.\n     */\n    uint32_t skip_byte_block;\n\n    /**\n     * The ID of the key used to encrypt the packet.  This should always be\n     * 16 bytes long, but may be changed in the future.\n     */\n    uint8_t *key_id;\n    uint32_t key_id_size;\n\n    /**\n     * The initialization vector.  This may have been zero-filled to be the\n     * correct block size.  This should always be 16 bytes long, but may be\n     * changed in the future.\n     */\n    uint8_t *iv;\n    uint32_t iv_size;\n\n    /**\n     * An array of subsample encryption info specifying how parts of the sample\n     * are encrypted.  If there are no subsamples, then the whole sample is\n     * encrypted.\n     */\n    AVSubsampleEncryptionInfo *subsamples;\n    uint32_t subsample_count;\n} AVEncryptionInfo;\n\n/**\n * This describes info used to initialize an encryption key system.\n *\n * The size of this struct is not part of the public ABI.\n */\ntypedef struct AVEncryptionInitInfo {\n    /**\n     * A unique identifier for the key system this is for, can be NULL if it\n     * is not known.  This should always be 16 bytes, but may change in the\n     * future.\n     */\n    uint8_t* system_id;\n    uint32_t system_id_size;\n\n    /**\n     * An array of key IDs this initialization data is for.  All IDs are the\n     * same length.  Can be NULL if there are no known key IDs.\n     */\n    uint8_t** key_ids;\n    /** The number of key IDs. */\n    uint32_t num_key_ids;\n    /**\n     * The number of bytes in each key ID.  This should always be 16, but may\n     * change in the future.\n     */\n    uint32_t key_id_size;\n\n    /**\n     * Key-system specific initialization data.  This data is copied directly\n     * from the file and the format depends on the specific key system.  
This\n     * can be NULL if there is no initialization data; in that case, there\n     * will be at least one key ID.\n     */\n    uint8_t* data;\n    uint32_t data_size;\n} AVEncryptionInitInfo;\n\n/**\n * Allocates an AVEncryptionInfo structure and sub-pointers to hold the given\n * number of subsamples.  This will allocate pointers for the key ID, IV,\n * and subsample entries, set the size members, and zero-initialize the rest.\n *\n * @param subsample_count The number of subsamples.\n * @param key_id_size The number of bytes in the key ID, should be 16.\n * @param iv_size The number of bytes in the IV, should be 16.\n *\n * @return The new AVEncryptionInfo structure, or NULL on error.\n */\nAVEncryptionInfo *av_encryption_info_alloc(uint32_t subsample_count, uint32_t key_id_size, uint32_t iv_size);\n\n/**\n * Allocates an AVEncryptionInfo structure with a copy of the given data.\n * @return The new AVEncryptionInfo structure, or NULL on error.\n */\nAVEncryptionInfo *av_encryption_info_clone(const AVEncryptionInfo *info);\n\n/**\n * Frees the given encryption info object.  This MUST NOT be used to free the\n * side-data data pointer, that should use normal side-data methods.\n */\nvoid av_encryption_info_free(AVEncryptionInfo *info);\n\n/**\n * Creates a copy of the AVEncryptionInfo that is contained in the given side\n * data.  The resulting object should be passed to av_encryption_info_free()\n * when done.\n *\n * @return The new AVEncryptionInfo structure, or NULL on error.\n */\nAVEncryptionInfo *av_encryption_info_get_side_data(const uint8_t *side_data, size_t side_data_size);\n\n/**\n * Allocates and initializes side data that holds a copy of the given encryption\n * info.  
The resulting pointer should be either freed using av_free or given\n * to av_packet_add_side_data().\n *\n * @return The new side-data pointer, or NULL.\n */\nuint8_t *av_encryption_info_add_side_data(\n      const AVEncryptionInfo *info, size_t *side_data_size);\n\n\n/**\n * Allocates an AVEncryptionInitInfo structure and sub-pointers to hold the\n * given sizes.  This will allocate pointers and set all the fields.\n *\n * @return The new AVEncryptionInitInfo structure, or NULL on error.\n */\nAVEncryptionInitInfo *av_encryption_init_info_alloc(\n    uint32_t system_id_size, uint32_t num_key_ids, uint32_t key_id_size, uint32_t data_size);\n\n/**\n * Frees the given encryption init info object.  This MUST NOT be used to free\n * the side-data data pointer, that should use normal side-data methods.\n */\nvoid av_encryption_init_info_free(AVEncryptionInitInfo* info);\n\n/**\n * Creates a copy of the AVEncryptionInitInfo that is contained in the given\n * side data.  The resulting object should be passed to\n * av_encryption_init_info_free() when done.\n *\n * @return The new AVEncryptionInitInfo structure, or NULL on error.\n */\nAVEncryptionInitInfo *av_encryption_init_info_get_side_data(\n    const uint8_t* side_data, size_t side_data_size);\n\n/**\n * Allocates and initializes side data that holds a copy of the given encryption\n * init info.  The resulting pointer should be either freed using av_free or\n * given to av_packet_add_side_data().\n *\n * @return The new side-data pointer, or NULL.\n */\nuint8_t *av_encryption_init_info_add_side_data(\n    const AVEncryptionInitInfo *info, size_t *side_data_size);\n\n#endif /* AVUTIL_ENCRYPTION_INFO_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/error.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * error code definitions\n */\n\n#ifndef AVUTIL_ERROR_H\n#define AVUTIL_ERROR_H\n\n#include <errno.h>\n#include <stddef.h>\n\n/**\n * @addtogroup lavu_error\n *\n * @{\n */\n\n\n/* error handling */\n#if EDOM > 0\n#define AVERROR(e) (-(e))   ///< Returns a negative error code from a POSIX error code, to return from library functions.\n#define AVUNERROR(e) (-(e)) ///< Returns a POSIX error code from a library function error return value.\n#else\n/* Some platforms have E* and errno already negated. 
*/\n#define AVERROR(e) (e)\n#define AVUNERROR(e) (e)\n#endif\n\n#define FFERRTAG(a, b, c, d) (-(int)MKTAG(a, b, c, d))\n\n#define AVERROR_BSF_NOT_FOUND      FFERRTAG(0xF8,'B','S','F') ///< Bitstream filter not found\n#define AVERROR_BUG                FFERRTAG( 'B','U','G','!') ///< Internal bug, also see AVERROR_BUG2\n#define AVERROR_BUFFER_TOO_SMALL   FFERRTAG( 'B','U','F','S') ///< Buffer too small\n#define AVERROR_DECODER_NOT_FOUND  FFERRTAG(0xF8,'D','E','C') ///< Decoder not found\n#define AVERROR_DEMUXER_NOT_FOUND  FFERRTAG(0xF8,'D','E','M') ///< Demuxer not found\n#define AVERROR_ENCODER_NOT_FOUND  FFERRTAG(0xF8,'E','N','C') ///< Encoder not found\n#define AVERROR_EOF                FFERRTAG( 'E','O','F',' ') ///< End of file\n#define AVERROR_EXIT               FFERRTAG( 'E','X','I','T') ///< Immediate exit was requested; the called function should not be restarted\n#define AVERROR_EXTERNAL           FFERRTAG( 'E','X','T',' ') ///< Generic error in an external library\n#define AVERROR_FILTER_NOT_FOUND   FFERRTAG(0xF8,'F','I','L') ///< Filter not found\n#define AVERROR_INVALIDDATA        FFERRTAG( 'I','N','D','A') ///< Invalid data found when processing input\n#define AVERROR_MUXER_NOT_FOUND    FFERRTAG(0xF8,'M','U','X') ///< Muxer not found\n#define AVERROR_OPTION_NOT_FOUND   FFERRTAG(0xF8,'O','P','T') ///< Option not found\n#define AVERROR_PATCHWELCOME       FFERRTAG( 'P','A','W','E') ///< Not yet implemented in FFmpeg, patches welcome\n#define AVERROR_PROTOCOL_NOT_FOUND FFERRTAG(0xF8,'P','R','O') ///< Protocol not found\n\n#define AVERROR_STREAM_NOT_FOUND   FFERRTAG(0xF8,'S','T','R') ///< Stream not found\n/**\n * This is semantically identical to AVERROR_BUG\n * it has been introduced in Libav after our AVERROR_BUG and with a modified value.\n */\n#define AVERROR_BUG2               FFERRTAG( 'B','U','G',' ')\n#define AVERROR_UNKNOWN            FFERRTAG( 'U','N','K','N') ///< Unknown error, typically from an external library\n#define AVERROR_EXPERIMENTAL   
    (-0x2bb2afa8) ///< Requested feature is flagged experimental. Set strict_std_compliance if you really want to use it.\n#define AVERROR_INPUT_CHANGED      (-0x636e6701) ///< Input changed between calls. Reconfiguration is required. (can be OR-ed with AVERROR_OUTPUT_CHANGED)\n#define AVERROR_OUTPUT_CHANGED     (-0x636e6702) ///< Output changed between calls. Reconfiguration is required. (can be OR-ed with AVERROR_INPUT_CHANGED)\n/* HTTP & RTSP errors */\n#define AVERROR_HTTP_BAD_REQUEST   FFERRTAG(0xF8,'4','0','0')\n#define AVERROR_HTTP_UNAUTHORIZED  FFERRTAG(0xF8,'4','0','1')\n#define AVERROR_HTTP_FORBIDDEN     FFERRTAG(0xF8,'4','0','3')\n#define AVERROR_HTTP_NOT_FOUND     FFERRTAG(0xF8,'4','0','4')\n#define AVERROR_HTTP_OTHER_4XX     FFERRTAG(0xF8,'4','X','X')\n#define AVERROR_HTTP_SERVER_ERROR  FFERRTAG(0xF8,'5','X','X')\n\n#define AV_ERROR_MAX_STRING_SIZE 64\n\n/**\n * Put a description of the AVERROR code errnum in errbuf.\n * In case of failure the global variable errno is set to indicate the\n * error. 
Even in case of failure av_strerror() will print a generic\n * error message indicating the errnum provided to errbuf.\n *\n * @param errnum      error code to describe\n * @param errbuf      buffer to which description is written\n * @param errbuf_size the size in bytes of errbuf\n * @return 0 on success, a negative value if a description for errnum\n * cannot be found\n */\nint av_strerror(int errnum, char *errbuf, size_t errbuf_size);\n\n/**\n * Fill the provided buffer with a string containing an error string\n * corresponding to the AVERROR code errnum.\n *\n * @param errbuf         a buffer\n * @param errbuf_size    size in bytes of errbuf\n * @param errnum         error code to describe\n * @return the buffer in input, filled with the error description\n * @see av_strerror()\n */\nstatic inline char *av_make_error_string(char *errbuf, size_t errbuf_size, int errnum)\n{\n    av_strerror(errnum, errbuf, errbuf_size);\n    return errbuf;\n}\n\n/**\n * Convenience macro, the return value should be used only directly in\n * function arguments but never stand-alone.\n */\n#define av_err2str(errnum) \\\n    av_make_error_string((char[AV_ERROR_MAX_STRING_SIZE]){0}, AV_ERROR_MAX_STRING_SIZE, errnum)\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_ERROR_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/eval.h",
    "content": "/*\n * Copyright (c) 2002 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * simple arithmetic expression evaluator\n */\n\n#ifndef AVUTIL_EVAL_H\n#define AVUTIL_EVAL_H\n\n#include \"avutil.h\"\n\ntypedef struct AVExpr AVExpr;\n\n/**\n * Parse and evaluate an expression.\n * Note, this is significantly slower than av_expr_eval().\n *\n * @param res a pointer to a double where is put the result value of\n * the expression, or NAN in case of error\n * @param s expression as a zero terminated string, for example \"1+2^3+5*5+sin(2/3)\"\n * @param const_names NULL terminated array of zero terminated strings of constant identifiers, for example {\"PI\", \"E\", 0}\n * @param const_values a zero terminated array of values for the identifiers from const_names\n * @param func1_names NULL terminated array of zero terminated strings of funcs1 identifiers\n * @param funcs1 NULL terminated array of function pointers for functions which take 1 argument\n * @param func2_names NULL terminated array of zero terminated strings of funcs2 identifiers\n * @param funcs2 NULL terminated array of function pointers for functions which take 2 arguments\n * @param opaque a pointer which will be passed 
to all functions from funcs1 and funcs2\n * @param log_ctx parent logging context\n * @return >= 0 in case of success, a negative value corresponding to an\n * AVERROR code otherwise\n */\nint av_expr_parse_and_eval(double *res, const char *s,\n                           const char * const *const_names, const double *const_values,\n                           const char * const *func1_names, double (* const *funcs1)(void *, double),\n                           const char * const *func2_names, double (* const *funcs2)(void *, double, double),\n                           void *opaque, int log_offset, void *log_ctx);\n\n/**\n * Parse an expression.\n *\n * @param expr a pointer where is put an AVExpr containing the parsed\n * value in case of successful parsing, or NULL otherwise.\n * The pointed to AVExpr must be freed with av_expr_free() by the user\n * when it is not needed anymore.\n * @param s expression as a zero terminated string, for example \"1+2^3+5*5+sin(2/3)\"\n * @param const_names NULL terminated array of zero terminated strings of constant identifiers, for example {\"PI\", \"E\", 0}\n * @param func1_names NULL terminated array of zero terminated strings of funcs1 identifiers\n * @param funcs1 NULL terminated array of function pointers for functions which take 1 argument\n * @param func2_names NULL terminated array of zero terminated strings of funcs2 identifiers\n * @param funcs2 NULL terminated array of function pointers for functions which take 2 arguments\n * @param log_ctx parent logging context\n * @return >= 0 in case of success, a negative value corresponding to an\n * AVERROR code otherwise\n */\nint av_expr_parse(AVExpr **expr, const char *s,\n                  const char * const *const_names,\n                  const char * const *func1_names, double (* const *funcs1)(void *, double),\n                  const char * const *func2_names, double (* const *funcs2)(void *, double, double),\n                  int log_offset, void *log_ctx);\n\n/**\n 
* Evaluate a previously parsed expression.\n *\n * @param const_values a zero terminated array of values for the identifiers from av_expr_parse() const_names\n * @param opaque a pointer which will be passed to all functions from funcs1 and funcs2\n * @return the value of the expression\n */\ndouble av_expr_eval(AVExpr *e, const double *const_values, void *opaque);\n\n/**\n * Free a parsed expression previously created with av_expr_parse().\n */\nvoid av_expr_free(AVExpr *e);\n\n/**\n * Parse the string in numstr and return its value as a double. If\n * the string is empty, contains only whitespaces, or does not contain\n * an initial substring that has the expected syntax for a\n * floating-point number, no conversion is performed. In this case,\n * returns a value of zero and the value returned in tail is the value\n * of numstr.\n *\n * @param numstr a string representing a number, may contain one of\n * the International System number postfixes, for example 'K', 'M',\n * 'G'. If 'i' is appended after the postfix, powers of 2 are used\n * instead of powers of 10. The 'B' postfix multiplies the value by\n * 8, and can be appended after another postfix or used alone. This\n * allows using for example 'KB', 'MiB', 'G' and 'B' as postfix.\n * @param tail if non-NULL puts here the pointer to the char next\n * after the last parsed character\n */\ndouble av_strtod(const char *numstr, char **tail);\n\n#endif /* AVUTIL_EVAL_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/ffversion.h",
    "content": "/* Automatically generated by version.sh, do not manually edit! */\n#ifndef AVUTIL_FFVERSION_H\n#define AVUTIL_FFVERSION_H\n#define FFMPEG_VERSION \"4.0.2\"\n#endif /* AVUTIL_FFVERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/fifo.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * a very simple circular buffer FIFO implementation\n */\n\n#ifndef AVUTIL_FIFO_H\n#define AVUTIL_FIFO_H\n\n#include <stdint.h>\n#include \"avutil.h\"\n#include \"attributes.h\"\n\ntypedef struct AVFifoBuffer {\n    uint8_t *buffer;\n    uint8_t *rptr, *wptr, *end;\n    uint32_t rndx, wndx;\n} AVFifoBuffer;\n\n/**\n * Initialize an AVFifoBuffer.\n * @param size of FIFO\n * @return AVFifoBuffer or NULL in case of memory allocation failure\n */\nAVFifoBuffer *av_fifo_alloc(unsigned int size);\n\n/**\n * Initialize an AVFifoBuffer.\n * @param nmemb number of elements\n * @param size  size of the single element\n * @return AVFifoBuffer or NULL in case of memory allocation failure\n */\nAVFifoBuffer *av_fifo_alloc_array(size_t nmemb, size_t size);\n\n/**\n * Free an AVFifoBuffer.\n * @param f AVFifoBuffer to free\n */\nvoid av_fifo_free(AVFifoBuffer *f);\n\n/**\n * Free an AVFifoBuffer and reset pointer to NULL.\n * @param f AVFifoBuffer to free\n */\nvoid av_fifo_freep(AVFifoBuffer **f);\n\n/**\n * Reset the AVFifoBuffer to the state right after av_fifo_alloc, in particular it is emptied.\n * @param f AVFifoBuffer to reset\n */\nvoid av_fifo_reset(AVFifoBuffer 
*f);\n\n/**\n * Return the amount of data in bytes in the AVFifoBuffer, that is the\n * amount of data you can read from it.\n * @param f AVFifoBuffer to read from\n * @return size\n */\nint av_fifo_size(const AVFifoBuffer *f);\n\n/**\n * Return the amount of space in bytes in the AVFifoBuffer, that is the\n * amount of data you can write into it.\n * @param f AVFifoBuffer to write into\n * @return size\n */\nint av_fifo_space(const AVFifoBuffer *f);\n\n/**\n * Feed data at a specific position from an AVFifoBuffer to a user-supplied callback.\n * Similar to av_fifo_generic_read but without discarding data.\n * @param f AVFifoBuffer to read from\n * @param offset offset from current read position\n * @param buf_size number of bytes to read\n * @param func generic read function\n * @param dest data destination\n */\nint av_fifo_generic_peek_at(AVFifoBuffer *f, void *dest, int offset, int buf_size, void (*func)(void*, void*, int));\n\n/**\n * Feed data from an AVFifoBuffer to a user-supplied callback.\n * Similar to av_fifo_generic_read but without discarding data.\n * @param f AVFifoBuffer to read from\n * @param buf_size number of bytes to read\n * @param func generic read function\n * @param dest data destination\n */\nint av_fifo_generic_peek(AVFifoBuffer *f, void *dest, int buf_size, void (*func)(void*, void*, int));\n\n/**\n * Feed data from an AVFifoBuffer to a user-supplied callback.\n * @param f AVFifoBuffer to read from\n * @param buf_size number of bytes to read\n * @param func generic read function\n * @param dest data destination\n */\nint av_fifo_generic_read(AVFifoBuffer *f, void *dest, int buf_size, void (*func)(void*, void*, int));\n\n/**\n * Feed data from a user-supplied callback to an AVFifoBuffer.\n * @param f AVFifoBuffer to write to\n * @param src data source; non-const since it may be used as a\n * modifiable context by the function defined in func\n * @param size number of bytes to write\n * @param func generic write function; the first 
parameter is src,\n * the second is dest_buf, the third is dest_buf_size.\n * func must return the number of bytes written to dest_buf, or <= 0 to\n * indicate no more data available to write.\n * If func is NULL, src is interpreted as a simple byte array for source data.\n * @return the number of bytes written to the FIFO\n */\nint av_fifo_generic_write(AVFifoBuffer *f, void *src, int size, int (*func)(void*, void*, int));\n\n/**\n * Resize an AVFifoBuffer.\n * In case of reallocation failure, the old FIFO is kept unchanged.\n *\n * @param f AVFifoBuffer to resize\n * @param size new AVFifoBuffer size in bytes\n * @return <0 for failure, >=0 otherwise\n */\nint av_fifo_realloc2(AVFifoBuffer *f, unsigned int size);\n\n/**\n * Enlarge an AVFifoBuffer.\n * In case of reallocation failure, the old FIFO is kept unchanged.\n * The new fifo size may be larger than the requested size.\n *\n * @param f AVFifoBuffer to resize\n * @param additional_space the amount of space in bytes to allocate in addition to av_fifo_size()\n * @return <0 for failure, >=0 otherwise\n */\nint av_fifo_grow(AVFifoBuffer *f, unsigned int additional_space);\n\n/**\n * Read and discard the specified amount of data from an AVFifoBuffer.\n * @param f AVFifoBuffer to read from\n * @param size amount of data to read in bytes\n */\nvoid av_fifo_drain(AVFifoBuffer *f, int size);\n\n/**\n * Return a pointer to the data stored in a FIFO buffer at a certain offset.\n * The FIFO buffer is not modified.\n *\n * @param f    AVFifoBuffer to peek at, f must be non-NULL\n * @param offs an offset in bytes, its absolute value must be less\n *             than the used buffer size or the returned pointer will\n *             point outside to the buffer data.\n *             The used buffer size can be checked with av_fifo_size().\n */\nstatic inline uint8_t *av_fifo_peek2(const AVFifoBuffer *f, int offs)\n{\n    uint8_t *ptr = f->rptr + offs;\n    if (ptr >= f->end)\n        ptr = f->buffer + (ptr - f->end);\n    
else if (ptr < f->buffer)\n        ptr = f->end - (f->buffer - ptr);\n    return ptr;\n}\n\n#endif /* AVUTIL_FIFO_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/file.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_FILE_H\n#define AVUTIL_FILE_H\n\n#include <stdint.h>\n\n#include \"avutil.h\"\n\n/**\n * @file\n * Misc file utilities.\n */\n\n/**\n * Read the file with name filename, and put its content in a newly\n * allocated buffer or map it with mmap() when available.\n * In case of success set *bufptr to the read or mmapped buffer, and\n * *size to the size in bytes of the buffer in *bufptr.\n * The returned buffer must be released with av_file_unmap().\n *\n * @param log_offset loglevel offset used for logging\n * @param log_ctx context used for logging\n * @return a non negative number in case of success, a negative value\n * corresponding to an AVERROR error code in case of failure\n */\nav_warn_unused_result\nint av_file_map(const char *filename, uint8_t **bufptr, size_t *size,\n                int log_offset, void *log_ctx);\n\n/**\n * Unmap or free the buffer bufptr created by av_file_map().\n *\n * @param size size in bytes of bufptr, must be the same as returned\n * by av_file_map()\n */\nvoid av_file_unmap(uint8_t *bufptr, size_t size);\n\n/**\n * Wrapper to work around the lack of mkstemp() on mingw.\n * Also, tries to create file in /tmp first, if possible.\n * 
*prefix can be a character constant; *filename will be allocated internally.\n * @return file descriptor of opened file (or negative value corresponding to an\n * AVERROR code on error)\n * and opened file name in **filename.\n * @note On very old libcs it is necessary to set a secure umask before\n *       calling this, av_tempfile() can't call umask itself as it is used in\n *       libraries and could interfere with the calling application.\n * @deprecated as fd numbers cannot be passed safely between libs on some platforms\n */\nint av_tempfile(const char *prefix, char **filename, int log_offset, void *log_ctx);\n\n#endif /* AVUTIL_FILE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/frame.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_frame\n * reference-counted frame API\n */\n\n#ifndef AVUTIL_FRAME_H\n#define AVUTIL_FRAME_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n#include \"avutil.h\"\n#include \"buffer.h\"\n#include \"dict.h\"\n#include \"rational.h\"\n#include \"samplefmt.h\"\n#include \"pixfmt.h\"\n#include \"version.h\"\n\n\n/**\n * @defgroup lavu_frame AVFrame\n * @ingroup lavu_data\n *\n * @{\n * AVFrame is an abstraction for reference-counted raw multimedia data.\n */\n\nenum AVFrameSideDataType {\n    /**\n     * The data is the AVPanScan struct defined in libavcodec.\n     */\n    AV_FRAME_DATA_PANSCAN,\n    /**\n     * ATSC A53 Part 4 Closed Captions.\n     * A53 CC bitstream is stored as uint8_t in AVFrameSideData.data.\n     * The number of bytes of CC data is AVFrameSideData.size.\n     */\n    AV_FRAME_DATA_A53_CC,\n    /**\n     * Stereoscopic 3d metadata.\n     * The data is the AVStereo3D struct defined in libavutil/stereo3d.h.\n     */\n    AV_FRAME_DATA_STEREO3D,\n    /**\n     * The data is the AVMatrixEncoding enum defined in libavutil/channel_layout.h.\n     */\n    AV_FRAME_DATA_MATRIXENCODING,\n    /**\n     * Metadata relevant to a downmix 
procedure.\n     * The data is the AVDownmixInfo struct defined in libavutil/downmix_info.h.\n     */\n    AV_FRAME_DATA_DOWNMIX_INFO,\n    /**\n     * ReplayGain information in the form of the AVReplayGain struct.\n     */\n    AV_FRAME_DATA_REPLAYGAIN,\n    /**\n     * This side data contains a 3x3 transformation matrix describing an affine\n     * transformation that needs to be applied to the frame for correct\n     * presentation.\n     *\n     * See libavutil/display.h for a detailed description of the data.\n     */\n    AV_FRAME_DATA_DISPLAYMATRIX,\n    /**\n     * Active Format Description data consisting of a single byte as specified\n     * in ETSI TS 101 154 using AVActiveFormatDescription enum.\n     */\n    AV_FRAME_DATA_AFD,\n    /**\n     * Motion vectors exported by some codecs (on demand through the export_mvs\n     * flag set in the libavcodec AVCodecContext flags2 option).\n     * The data is the AVMotionVector struct defined in\n     * libavutil/motion_vector.h.\n     */\n    AV_FRAME_DATA_MOTION_VECTORS,\n    /**\n     * Recommends skipping the specified number of samples. This is exported\n     * only if the \"skip_manual\" AVOption is set in libavcodec.\n     * This has the same format as AV_PKT_DATA_SKIP_SAMPLES.\n     * @code\n     * u32le number of samples to skip from start of this packet\n     * u32le number of samples to skip from end of this packet\n     * u8    reason for start skip\n     * u8    reason for end   skip (0=padding silence, 1=convergence)\n     * @endcode\n     */\n    AV_FRAME_DATA_SKIP_SAMPLES,\n    /**\n     * This side data must be associated with an audio frame and corresponds to\n     * enum AVAudioServiceType defined in avcodec.h.\n     */\n    AV_FRAME_DATA_AUDIO_SERVICE_TYPE,\n    /**\n     * Mastering display metadata associated with a video frame. 
The payload is\n     * an AVMasteringDisplayMetadata type and contains information about the\n     * mastering display color volume.\n     */\n    AV_FRAME_DATA_MASTERING_DISPLAY_METADATA,\n    /**\n     * The GOP timecode in 25 bit timecode format. Data format is 64-bit integer.\n     * This is set on the first frame of a GOP that has a temporal reference of 0.\n     */\n    AV_FRAME_DATA_GOP_TIMECODE,\n\n    /**\n     * The data represents the AVSphericalMapping structure defined in\n     * libavutil/spherical.h.\n     */\n    AV_FRAME_DATA_SPHERICAL,\n\n    /**\n     * Content light level (based on CTA-861.3). This payload contains data in\n     * the form of the AVContentLightMetadata struct.\n     */\n    AV_FRAME_DATA_CONTENT_LIGHT_LEVEL,\n\n    /**\n     * The data contains an ICC profile as an opaque octet buffer following the\n     * format described by ISO 15076-1 with an optional name defined in the\n     * metadata key entry \"name\".\n     */\n    AV_FRAME_DATA_ICC_PROFILE,\n\n#if FF_API_FRAME_QP\n    /**\n     * Implementation-specific description of the format of AV_FRAME_QP_TABLE_DATA.\n     * The contents of this side data are undocumented and internal; use\n     * av_frame_set_qp_table() and av_frame_get_qp_table() to access this in a\n     * meaningful way instead.\n     */\n    AV_FRAME_DATA_QP_TABLE_PROPERTIES,\n\n    /**\n     * Raw QP table data. Its format is described by\n     * AV_FRAME_DATA_QP_TABLE_PROPERTIES. 
Use av_frame_set_qp_table() and\n     * av_frame_get_qp_table() to access this instead.\n     */\n    AV_FRAME_DATA_QP_TABLE_DATA,\n#endif\n};\n\nenum AVActiveFormatDescription {\n    AV_AFD_SAME         = 8,\n    AV_AFD_4_3          = 9,\n    AV_AFD_16_9         = 10,\n    AV_AFD_14_9         = 11,\n    AV_AFD_4_3_SP_14_9  = 13,\n    AV_AFD_16_9_SP_14_9 = 14,\n    AV_AFD_SP_4_3       = 15,\n};\n\n\n/**\n * Structure to hold side data for an AVFrame.\n *\n * sizeof(AVFrameSideData) is not a part of the public ABI, so new fields may be added\n * to the end with a minor bump.\n */\ntypedef struct AVFrameSideData {\n    enum AVFrameSideDataType type;\n    uint8_t *data;\n    int      size;\n    AVDictionary *metadata;\n    AVBufferRef *buf;\n} AVFrameSideData;\n\n/**\n * This structure describes decoded (raw) audio or video data.\n *\n * AVFrame must be allocated using av_frame_alloc(). Note that this only\n * allocates the AVFrame itself, the buffers for the data must be managed\n * through other means (see below).\n * AVFrame must be freed with av_frame_free().\n *\n * AVFrame is typically allocated once and then reused multiple times to hold\n * different data (e.g. a single AVFrame to hold frames received from a\n * decoder). In such a case, av_frame_unref() will free any references held by\n * the frame and reset it to its original clean state before it\n * is reused again.\n *\n * The data described by an AVFrame is usually reference counted through the\n * AVBuffer API. The underlying buffer references are stored in AVFrame.buf /\n * AVFrame.extended_buf. An AVFrame is considered to be reference counted if at\n * least one reference is set, i.e. if AVFrame.buf[0] != NULL. 
In such a case,\n * every single data plane must be contained in one of the buffers in\n * AVFrame.buf or AVFrame.extended_buf.\n * There may be a single buffer for all the data, or one separate buffer for\n * each plane, or anything in between.\n *\n * sizeof(AVFrame) is not a part of the public ABI, so new fields may be added\n * to the end with a minor bump.\n *\n * Fields can be accessed through AVOptions; the name string used matches the\n * C structure field name for fields accessible through AVOptions. The AVClass\n * for AVFrame can be obtained from avcodec_get_frame_class().\n */\ntypedef struct AVFrame {\n#define AV_NUM_DATA_POINTERS 8\n    /**\n     * pointer to the picture/channel planes.\n     * This might be different from the first allocated byte\n     *\n     * Some decoders access areas outside 0,0 - width,height, please\n     * see avcodec_align_dimensions2(). Some filters and swscale can read\n     * up to 16 bytes beyond the planes, if these filters are to be used,\n     * then 16 extra bytes must be allocated.\n     *\n     * NOTE: Except for hwaccel formats, pointers not needed by the format\n     * MUST be set to NULL.\n     */\n    uint8_t *data[AV_NUM_DATA_POINTERS];\n\n    /**\n     * For video, size in bytes of each picture line.\n     * For audio, size in bytes of each plane.\n     *\n     * For audio, only linesize[0] may be set. 
For planar audio, each channel\n     * plane must be the same size.\n     *\n     * For video the linesizes should be multiples of the CPU's alignment\n     * preference, this is 16 or 32 for modern desktop CPUs.\n     * Some code requires such alignment, other code can be slower without\n     * correct alignment, and for yet other code it makes no difference.\n     *\n     * @note The linesize may be larger than the size of usable data -- there\n     * may be extra padding present for performance reasons.\n     */\n    int linesize[AV_NUM_DATA_POINTERS];\n\n    /**\n     * pointers to the data planes/channels.\n     *\n     * For video, this should simply point to data[].\n     *\n     * For planar audio, each channel has a separate data pointer, and\n     * linesize[0] contains the size of each channel buffer.\n     * For packed audio, there is just one data pointer, and linesize[0]\n     * contains the total size of the buffer for all channels.\n     *\n     * Note: Both data and extended_data should always be set in a valid frame,\n     * but for planar audio with more channels than can fit in data,\n     * extended_data must be used in order to access all channels.\n     */\n    uint8_t **extended_data;\n\n    /**\n     * @name Video dimensions\n     * Video frames only. The coded dimensions (in pixels) of the video frame,\n     * i.e. 
the size of the rectangle that contains some well-defined values.\n     *\n     * @note The part of the frame intended for display/presentation is further\n     * restricted by the @ref cropping \"Cropping rectangle\".\n     * @{\n     */\n    int width, height;\n    /**\n     * @}\n     */\n\n    /**\n     * number of audio samples (per channel) described by this frame\n     */\n    int nb_samples;\n\n    /**\n     * format of the frame, -1 if unknown or unset.\n     * Values correspond to enum AVPixelFormat for video frames,\n     * enum AVSampleFormat for audio.\n     */\n    int format;\n\n    /**\n     * 1 -> keyframe, 0 -> not\n     */\n    int key_frame;\n\n    /**\n     * Picture type of the frame.\n     */\n    enum AVPictureType pict_type;\n\n    /**\n     * Sample aspect ratio for the video frame, 0/1 if unknown/unspecified.\n     */\n    AVRational sample_aspect_ratio;\n\n    /**\n     * Presentation timestamp in time_base units (time when frame should be shown to user).\n     */\n    int64_t pts;\n\n#if FF_API_PKT_PTS\n    /**\n     * PTS copied from the AVPacket that was decoded to produce this frame.\n     * @deprecated use the pts field instead\n     */\n    attribute_deprecated\n    int64_t pkt_pts;\n#endif\n\n    /**\n     * DTS copied from the AVPacket that triggered returning this frame. 
(if frame threading isn't used)\n     * This is also the Presentation time of this AVFrame calculated from\n     * only AVPacket.dts values without pts values.\n     */\n    int64_t pkt_dts;\n\n    /**\n     * picture number in bitstream order\n     */\n    int coded_picture_number;\n    /**\n     * picture number in display order\n     */\n    int display_picture_number;\n\n    /**\n     * quality (between 1 (good) and FF_LAMBDA_MAX (bad))\n     */\n    int quality;\n\n    /**\n     * for some private data of the user\n     */\n    void *opaque;\n\n#if FF_API_ERROR_FRAME\n    /**\n     * @deprecated unused\n     */\n    attribute_deprecated\n    uint64_t error[AV_NUM_DATA_POINTERS];\n#endif\n\n    /**\n     * When decoding, this signals how much the picture must be delayed.\n     * extra_delay = repeat_pict / (2*fps)\n     */\n    int repeat_pict;\n\n    /**\n     * The content of the picture is interlaced.\n     */\n    int interlaced_frame;\n\n    /**\n     * If the content is interlaced, is top field displayed first.\n     */\n    int top_field_first;\n\n    /**\n     * Tell user application that palette has changed from previous frame.\n     */\n    int palette_has_changed;\n\n    /**\n     * reordered opaque 64 bits (generally an integer or a double precision float\n     * PTS but can be anything).\n     * The user sets AVCodecContext.reordered_opaque to represent the input at\n     * that time,\n     * the decoder reorders values as needed and sets AVFrame.reordered_opaque\n     * to exactly one of the values provided by the user through AVCodecContext.reordered_opaque\n     * @deprecated in favor of pkt_pts\n     */\n    int64_t reordered_opaque;\n\n    /**\n     * Sample rate of the audio data.\n     */\n    int sample_rate;\n\n    /**\n     * Channel layout of the audio data.\n     */\n    uint64_t channel_layout;\n\n    /**\n     * AVBuffer references backing the data for this frame. 
If all elements of\n     * this array are NULL, then this frame is not reference counted. This array\n     * must be filled contiguously -- if buf[i] is non-NULL then buf[j] must\n     * also be non-NULL for all j < i.\n     *\n     * There may be at most one AVBuffer per data plane, so for video this array\n     * always contains all the references. For planar audio with more than\n     * AV_NUM_DATA_POINTERS channels, there may be more buffers than can fit in\n     * this array. Then the extra AVBufferRef pointers are stored in the\n     * extended_buf array.\n     */\n    AVBufferRef *buf[AV_NUM_DATA_POINTERS];\n\n    /**\n     * For planar audio which requires more than AV_NUM_DATA_POINTERS\n     * AVBufferRef pointers, this array will hold all the references which\n     * cannot fit into AVFrame.buf.\n     *\n     * Note that this is different from AVFrame.extended_data, which always\n     * contains all the pointers. This array only contains the extra pointers,\n     * which cannot fit into AVFrame.buf.\n     *\n     * This array is always allocated using av_malloc() by whoever constructs\n     * the frame. It is freed in av_frame_unref().\n     */\n    AVBufferRef **extended_buf;\n    /**\n     * Number of elements in extended_buf.\n     */\n    int        nb_extended_buf;\n\n    AVFrameSideData **side_data;\n    int            nb_side_data;\n\n/**\n * @defgroup lavu_frame_flags AV_FRAME_FLAGS\n * @ingroup lavu_frame\n * Flags describing additional frame properties.\n *\n * @{\n */\n\n/**\n * The frame data may be corrupted, e.g. 
due to decoding errors.\n */\n#define AV_FRAME_FLAG_CORRUPT       (1 << 0)\n/**\n * A flag to mark the frames which need to be decoded, but shouldn't be output.\n */\n#define AV_FRAME_FLAG_DISCARD   (1 << 2)\n/**\n * @}\n */\n\n    /**\n     * Frame flags, a combination of @ref lavu_frame_flags\n     */\n    int flags;\n\n    /**\n     * MPEG vs JPEG YUV range.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVColorRange color_range;\n\n    enum AVColorPrimaries color_primaries;\n\n    enum AVColorTransferCharacteristic color_trc;\n\n    /**\n     * YUV colorspace type.\n     * - encoding: Set by user\n     * - decoding: Set by libavcodec\n     */\n    enum AVColorSpace colorspace;\n\n    enum AVChromaLocation chroma_location;\n\n    /**\n     * frame timestamp estimated using various heuristics, in stream time base\n     * - encoding: unused\n     * - decoding: set by libavcodec, read by user.\n     */\n    int64_t best_effort_timestamp;\n\n    /**\n     * reordered pos from the last AVPacket that has been input into the decoder\n     * - encoding: unused\n     * - decoding: Read by user.\n     */\n    int64_t pkt_pos;\n\n    /**\n     * duration of the corresponding packet, expressed in\n     * AVStream->time_base units, 0 if unknown.\n     * - encoding: unused\n     * - decoding: Read by user.\n     */\n    int64_t pkt_duration;\n\n    /**\n     * metadata.\n     * - encoding: Set by user.\n     * - decoding: Set by libavcodec.\n     */\n    AVDictionary *metadata;\n\n    /**\n     * decode error flags of the frame, set to a combination of\n     * FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there\n     * were errors during the decoding.\n     * - encoding: unused\n     * - decoding: set by libavcodec, read by user.\n     */\n    int decode_error_flags;\n#define FF_DECODE_ERROR_INVALID_BITSTREAM   1\n#define FF_DECODE_ERROR_MISSING_REFERENCE   2\n\n    /**\n     * number of audio channels, only used for 
audio.\n     * - encoding: unused\n     * - decoding: Read by user.\n     */\n    int channels;\n\n    /**\n     * size of the corresponding packet containing the compressed\n     * frame.\n     * It is set to a negative value if unknown.\n     * - encoding: unused\n     * - decoding: set by libavcodec, read by user.\n     */\n    int pkt_size;\n\n#if FF_API_FRAME_QP\n    /**\n     * QP table\n     */\n    attribute_deprecated\n    int8_t *qscale_table;\n    /**\n     * QP store stride\n     */\n    attribute_deprecated\n    int qstride;\n\n    attribute_deprecated\n    int qscale_type;\n\n    attribute_deprecated\n    AVBufferRef *qp_table_buf;\n#endif\n    /**\n     * For hwaccel-format frames, this should be a reference to the\n     * AVHWFramesContext describing the frame.\n     */\n    AVBufferRef *hw_frames_ctx;\n\n    /**\n     * AVBufferRef for free use by the API user. FFmpeg will never check the\n     * contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when\n     * the frame is unreferenced. av_frame_copy_props() calls create a new\n     * reference with av_buffer_ref() for the target frame's opaque_ref field.\n     *\n     * This is unrelated to the opaque field, although it serves a similar\n     * purpose.\n     */\n    AVBufferRef *opaque_ref;\n\n    /**\n     * @anchor cropping\n     * @name Cropping\n     * Video frames only. 
The number of pixels to discard from the\n     * top/bottom/left/right border of the frame to obtain the sub-rectangle of\n     * the frame intended for presentation.\n     * @{\n     */\n    size_t crop_top;\n    size_t crop_bottom;\n    size_t crop_left;\n    size_t crop_right;\n    /**\n     * @}\n     */\n\n    /**\n     * AVBufferRef for internal use by a single libav* library.\n     * Must not be used to transfer data between libraries.\n     * Has to be NULL when ownership of the frame leaves the respective library.\n     *\n     * Code outside the FFmpeg libs should never check or change the contents of the buffer ref.\n     *\n     * FFmpeg calls av_buffer_unref() on it when the frame is unreferenced.\n     * av_frame_copy_props() calls create a new reference with av_buffer_ref()\n     * for the target frame's private_ref field.\n     */\n    AVBufferRef *private_ref;\n} AVFrame;\n\n#if FF_API_FRAME_GET_SET\n/**\n * Accessors for some AVFrame fields. These used to be provided for ABI\n * compatibility, and do not need to be used anymore.\n */\nattribute_deprecated\nint64_t av_frame_get_best_effort_timestamp(const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_best_effort_timestamp(AVFrame *frame, int64_t val);\nattribute_deprecated\nint64_t av_frame_get_pkt_duration         (const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_pkt_duration         (AVFrame *frame, int64_t val);\nattribute_deprecated\nint64_t av_frame_get_pkt_pos              (const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_pkt_pos              (AVFrame *frame, int64_t val);\nattribute_deprecated\nint64_t av_frame_get_channel_layout       (const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_channel_layout       (AVFrame *frame, int64_t val);\nattribute_deprecated\nint     av_frame_get_channels             (const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_channels             (AVFrame *frame, int     
val);\nattribute_deprecated\nint     av_frame_get_sample_rate          (const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_sample_rate          (AVFrame *frame, int     val);\nattribute_deprecated\nAVDictionary *av_frame_get_metadata       (const AVFrame *frame);\nattribute_deprecated\nvoid          av_frame_set_metadata       (AVFrame *frame, AVDictionary *val);\nattribute_deprecated\nint     av_frame_get_decode_error_flags   (const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_decode_error_flags   (AVFrame *frame, int     val);\nattribute_deprecated\nint     av_frame_get_pkt_size(const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_pkt_size(AVFrame *frame, int val);\n#if FF_API_FRAME_QP\nattribute_deprecated\nint8_t *av_frame_get_qp_table(AVFrame *f, int *stride, int *type);\nattribute_deprecated\nint av_frame_set_qp_table(AVFrame *f, AVBufferRef *buf, int stride, int type);\n#endif\nattribute_deprecated\nenum AVColorSpace av_frame_get_colorspace(const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_colorspace(AVFrame *frame, enum AVColorSpace val);\nattribute_deprecated\nenum AVColorRange av_frame_get_color_range(const AVFrame *frame);\nattribute_deprecated\nvoid    av_frame_set_color_range(AVFrame *frame, enum AVColorRange val);\n#endif\n\n/**\n * Get the name of a colorspace.\n * @return a static string identifying the colorspace; can be NULL.\n */\nconst char *av_get_colorspace_name(enum AVColorSpace val);\n\n/**\n * Allocate an AVFrame and set its fields to default values.  The resulting\n * struct must be freed using av_frame_free().\n *\n * @return An AVFrame filled with default values or NULL on failure.\n *\n * @note this only allocates the AVFrame itself, not the data buffers. Those\n * must be allocated through other means, e.g. with av_frame_get_buffer() or\n * manually.\n */\nAVFrame *av_frame_alloc(void);\n\n/**\n * Free the frame and any dynamically allocated objects in it,\n * e.g. 
extended_data. If the frame is reference counted, it will be\n * unreferenced first.\n *\n * @param frame frame to be freed. The pointer will be set to NULL.\n */\nvoid av_frame_free(AVFrame **frame);\n\n/**\n * Set up a new reference to the data described by the source frame.\n *\n * Copy frame properties from src to dst and create a new reference for each\n * AVBufferRef from src.\n *\n * If src is not reference counted, new buffers are allocated and the data is\n * copied.\n *\n * @warning: dst MUST have been either unreferenced with av_frame_unref(dst),\n *           or newly allocated with av_frame_alloc() before calling this\n *           function, or undefined behavior will occur.\n *\n * @return 0 on success, a negative AVERROR on error\n */\nint av_frame_ref(AVFrame *dst, const AVFrame *src);\n\n/**\n * Create a new frame that references the same data as src.\n *\n * This is a shortcut for av_frame_alloc()+av_frame_ref().\n *\n * @return newly created AVFrame on success, NULL on error.\n */\nAVFrame *av_frame_clone(const AVFrame *src);\n\n/**\n * Unreference all the buffers referenced by frame and reset the frame fields.\n */\nvoid av_frame_unref(AVFrame *frame);\n\n/**\n * Move everything contained in src to dst and reset src.\n *\n * @warning: dst is not unreferenced, but directly overwritten without reading\n *           or deallocating its contents. 
Call av_frame_unref(dst) manually\n *           before calling this function to ensure that no memory is leaked.\n */\nvoid av_frame_move_ref(AVFrame *dst, AVFrame *src);\n\n/**\n * Allocate new buffer(s) for audio or video data.\n *\n * The following fields must be set on frame before calling this function:\n * - format (pixel format for video, sample format for audio)\n * - width and height for video\n * - nb_samples and channel_layout for audio\n *\n * This function will fill AVFrame.data and AVFrame.buf arrays and, if\n * necessary, allocate and fill AVFrame.extended_data and AVFrame.extended_buf.\n * For planar formats, one buffer will be allocated for each plane.\n *\n * @warning: if frame already has been allocated, calling this function will\n *           leak memory. In addition, undefined behavior can occur in certain\n *           cases.\n *\n * @param frame frame in which to store the new buffers.\n * @param align Required buffer size alignment. If equal to 0, alignment will be\n *              chosen automatically for the current CPU. It is highly\n *              recommended to pass 0 here unless you know what you are doing.\n *\n * @return 0 on success, a negative AVERROR on error.\n */\nint av_frame_get_buffer(AVFrame *frame, int align);\n\n/**\n * Check if the frame data is writable.\n *\n * @return A positive value if the frame data is writable (which is true if and\n * only if each of the underlying buffers has only one reference, namely the one\n * stored in this frame). Return 0 otherwise.\n *\n * If 1 is returned the answer is valid until av_buffer_ref() is called on any\n * of the underlying AVBufferRefs (e.g. 
through av_frame_ref() or directly).\n *\n * @see av_frame_make_writable(), av_buffer_is_writable()\n */\nint av_frame_is_writable(AVFrame *frame);\n\n/**\n * Ensure that the frame data is writable, avoiding data copy if possible.\n *\n * Do nothing if the frame is writable, allocate new buffers and copy the data\n * if it is not.\n *\n * @return 0 on success, a negative AVERROR on error.\n *\n * @see av_frame_is_writable(), av_buffer_is_writable(),\n * av_buffer_make_writable()\n */\nint av_frame_make_writable(AVFrame *frame);\n\n/**\n * Copy the frame data from src to dst.\n *\n * This function does not allocate anything, dst must be already initialized and\n * allocated with the same parameters as src.\n *\n * This function only copies the frame data (i.e. the contents of the data /\n * extended data arrays), not any other properties.\n *\n * @return >= 0 on success, a negative AVERROR on error.\n */\nint av_frame_copy(AVFrame *dst, const AVFrame *src);\n\n/**\n * Copy only \"metadata\" fields from src to dst.\n *\n * Metadata for the purpose of this function are those fields that do not affect\n * the data layout in the buffers.  E.g. 
pts, sample rate (for audio) or sample\n * aspect ratio (for video), but not width/height or channel layout.\n * Side data is also copied.\n */\nint av_frame_copy_props(AVFrame *dst, const AVFrame *src);\n\n/**\n * Get the buffer reference a given data plane is stored in.\n *\n * @param plane index of the data plane of interest in frame->extended_data.\n *\n * @return the buffer reference that contains the plane or NULL if the input\n * frame is not valid.\n */\nAVBufferRef *av_frame_get_plane_buffer(AVFrame *frame, int plane);\n\n/**\n * Add a new side data to a frame.\n *\n * @param frame a frame to which the side data should be added\n * @param type type of the added side data\n * @param size size of the side data\n *\n * @return newly added side data on success, NULL on error\n */\nAVFrameSideData *av_frame_new_side_data(AVFrame *frame,\n                                        enum AVFrameSideDataType type,\n                                        int size);\n\n/**\n * Add a new side data to a frame from an existing AVBufferRef\n *\n * @param frame a frame to which the side data should be added\n * @param type  the type of the added side data\n * @param buf   an AVBufferRef to add as side data. The ownership of\n *              the reference is transferred to the frame.\n *\n * @return newly added side data on success, NULL on error. 
On failure\n *         the frame is unchanged and the AVBufferRef remains owned by\n *         the caller.\n */\nAVFrameSideData *av_frame_new_side_data_from_buf(AVFrame *frame,\n                                                 enum AVFrameSideDataType type,\n                                                 AVBufferRef *buf);\n\n/**\n * @return a pointer to the side data of a given type on success, NULL if there\n * is no side data with such type in this frame.\n */\nAVFrameSideData *av_frame_get_side_data(const AVFrame *frame,\n                                        enum AVFrameSideDataType type);\n\n/**\n * If side data of the supplied type exists in the frame, free it and remove it\n * from the frame.\n */\nvoid av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type);\n\n\n/**\n * Flags for frame cropping.\n */\nenum {\n    /**\n     * Apply the maximum possible cropping, even if it requires setting the\n     * AVFrame.data[] entries to unaligned pointers. Passing unaligned data\n     * to FFmpeg API is generally not allowed, and causes undefined behavior\n     * (such as crashes). You can pass unaligned data only to FFmpeg APIs that\n     * are explicitly documented to accept it. Use this flag only if you\n     * absolutely know what you are doing.\n     */\n    AV_FRAME_CROP_UNALIGNED     = 1 << 0,\n};\n\n/**\n * Crop the given video AVFrame according to its crop_left/crop_top/crop_right/\n * crop_bottom fields. If cropping is successful, the function will adjust the\n * data pointers and the width/height fields, and set the crop fields to 0.\n *\n * In all cases, the cropping boundaries will be rounded to the inherent\n * alignment of the pixel format. In some cases, such as for opaque hwaccel\n * formats, the left/top cropping is ignored. 
The crop fields are set to 0 even\n * if the cropping was rounded or ignored.\n *\n * @param frame the frame which should be cropped\n * @param flags Some combination of AV_FRAME_CROP_* flags, or 0.\n *\n * @return >= 0 on success, a negative AVERROR on error. If the cropping fields\n * were invalid, AVERROR(ERANGE) is returned, and nothing is changed.\n */\nint av_frame_apply_cropping(AVFrame *frame, int flags);\n\n/**\n * @return a string identifying the side data type\n */\nconst char *av_frame_side_data_name(enum AVFrameSideDataType type);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_FRAME_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hash.h",
    "content": "/*\n * Copyright (C) 2013 Reimar Döffinger <Reimar.Doeffinger@gmx.de>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_hash_generic\n * Generic hashing API\n */\n\n#ifndef AVUTIL_HASH_H\n#define AVUTIL_HASH_H\n\n#include <stdint.h>\n\n#include \"version.h\"\n\n/**\n * @defgroup lavu_hash Hash Functions\n * @ingroup lavu_crypto\n * Hash functions useful in multimedia.\n *\n * Hash functions are widely used in multimedia, from error checking and\n * concealment to internal regression testing. libavutil has efficient\n * implementations of a variety of hash functions that may be useful for\n * FFmpeg and other multimedia applications.\n *\n * @{\n *\n * @defgroup lavu_hash_generic Generic Hashing API\n * An abstraction layer for all hash functions supported by libavutil.\n *\n * If your application needs to support a wide range of different hash\n * functions, then the Generic Hashing API is for you. 
It provides a generic,\n * reusable API for @ref lavu_hash \"all hash functions\" implemented in libavutil.\n * If you just need to use one particular hash function, use the @ref lavu_hash\n * \"individual hash\" directly.\n *\n * @section Sample Code\n *\n * A basic template for using the Generic Hashing API follows:\n *\n * @code\n * struct AVHashContext *ctx = NULL;\n * const char *hash_name = NULL;\n * uint8_t *output_buf = NULL;\n *\n * // Select from a string returned by av_hash_names()\n * hash_name = ...;\n *\n * // Allocate a hash context\n * ret = av_hash_alloc(&ctx, hash_name);\n * if (ret < 0)\n *     return ret;\n *\n * // Initialize the hash context\n * av_hash_init(ctx);\n *\n * // Update the hash context with data\n * while (data_left) {\n *     av_hash_update(ctx, data, size);\n * }\n *\n * // Now we have no more data, so it is time to finalize the hash and get the\n * // output. But we need to first allocate an output buffer. Note that you can\n * // use any memory allocation function, including malloc(), not just\n * // av_malloc().\n * output_buf = av_malloc(av_hash_get_size(ctx));\n * if (!output_buf)\n *     return AVERROR(ENOMEM);\n *\n * // Finalize the hash context.\n * // You can use any of the av_hash_final*() functions provided, for other\n * // output formats. If you do so, be sure to adjust the memory allocation\n * // above. See the function documentation below for the exact amount of extra\n * // memory needed.\n * av_hash_final(ctx, output_buf);\n *\n * // Free the context\n * av_hash_freep(&ctx);\n * @endcode\n *\n * @section Hash Function-Specific Information\n * If the CRC32 hash is selected, the #AV_CRC_32_IEEE polynomial will be\n * used.\n *\n * If the Murmur3 hash is selected, the default seed will be used. See @ref\n * lavu_murmur3_seedinfo \"Murmur3\" for more information.\n *\n * @{\n */\n\n/**\n * @example ffhash.c\n * This example is a simple command line application that takes one or more\n * arguments. 
It demonstrates a typical use of the hashing API with allocation,\n * initialization, updating, and finalizing.\n */\n\nstruct AVHashContext;\n\n/**\n * Allocate a hash context for the algorithm specified by name.\n *\n * @return  >= 0 for success, a negative error code for failure\n *\n * @note The context is not initialized after a call to this function; you must\n * call av_hash_init() to do so.\n */\nint av_hash_alloc(struct AVHashContext **ctx, const char *name);\n\n/**\n * Get the names of available hash algorithms.\n *\n * This function can be used to enumerate the algorithms.\n *\n * @param[in] i  Index of the hash algorithm, starting from 0\n * @return       Pointer to a static string or `NULL` if `i` is out of range\n */\nconst char *av_hash_names(int i);\n\n/**\n * Get the name of the algorithm corresponding to the given hash context.\n */\nconst char *av_hash_get_name(const struct AVHashContext *ctx);\n\n/**\n * Maximum value that av_hash_get_size() will currently return.\n *\n * You can use this if you absolutely want or need to use static allocation for\n * the output buffer and are fine with not supporting hashes newly added to\n * libavutil without recompilation.\n *\n * @warning\n * Adding new hashes with larger sizes, and increasing the macro while doing\n * so, will not be considered an ABI change. 
To prevent your code from\n * overflowing a buffer, either dynamically allocate the output buffer with\n * av_hash_get_size(), or limit your use of the Hashing API to hashes that are\n * already in FFmpeg during the time of compilation.\n */\n#define AV_HASH_MAX_SIZE 64\n\n/**\n * Get the size of the resulting hash value in bytes.\n *\n * The maximum value this function will currently return is available as macro\n * #AV_HASH_MAX_SIZE.\n *\n * @param[in]     ctx Hash context\n * @return            Size of the hash value in bytes\n */\nint av_hash_get_size(const struct AVHashContext *ctx);\n\n/**\n * Initialize or reset a hash context.\n *\n * @param[in,out] ctx Hash context\n */\nvoid av_hash_init(struct AVHashContext *ctx);\n\n/**\n * Update a hash context with additional data.\n *\n * @param[in,out] ctx Hash context\n * @param[in]     src Data to be added to the hash context\n * @param[in]     len Size of the additional data\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_hash_update(struct AVHashContext *ctx, const uint8_t *src, int len);\n#else\nvoid av_hash_update(struct AVHashContext *ctx, const uint8_t *src, size_t len);\n#endif\n\n/**\n * Finalize a hash context and compute the actual hash value.\n *\n * The minimum size of `dst` buffer is given by av_hash_get_size() or\n * #AV_HASH_MAX_SIZE. 
The use of the latter macro is discouraged.\n *\n * It is not safe to update or finalize a hash context again, if it has already\n * been finalized.\n *\n * @param[in,out] ctx Hash context\n * @param[out]    dst Where the final hash value will be stored\n *\n * @see av_hash_final_bin() provides an alternative API\n */\nvoid av_hash_final(struct AVHashContext *ctx, uint8_t *dst);\n\n/**\n * Finalize a hash context and store the actual hash value in a buffer.\n *\n * It is not safe to update or finalize a hash context again, if it has already\n * been finalized.\n *\n * If `size` is smaller than the hash size (given by av_hash_get_size()), the\n * hash is truncated; if size is larger, the buffer is padded with 0.\n *\n * @param[in,out] ctx  Hash context\n * @param[out]    dst  Where the final hash value will be stored\n * @param[in]     size Number of bytes to write to `dst`\n */\nvoid av_hash_final_bin(struct AVHashContext *ctx, uint8_t *dst, int size);\n\n/**\n * Finalize a hash context and store the hexadecimal representation of the\n * actual hash value as a string.\n *\n * It is not safe to update or finalize a hash context again, if it has already\n * been finalized.\n *\n * The string is always 0-terminated.\n *\n * If `size` is smaller than `2 * hash_size + 1`, where `hash_size` is the\n * value returned by av_hash_get_size(), the string will be truncated.\n *\n * @param[in,out] ctx  Hash context\n * @param[out]    dst  Where the string will be stored\n * @param[in]     size Maximum number of bytes to write to `dst`\n */\nvoid av_hash_final_hex(struct AVHashContext *ctx, uint8_t *dst, int size);\n\n/**\n * Finalize a hash context and store the Base64 representation of the\n * actual hash value as a string.\n *\n * It is not safe to update or finalize a hash context again, if it has already\n * been finalized.\n *\n * The string is always 0-terminated.\n *\n * If `size` is smaller than AV_BASE64_SIZE(hash_size), where `hash_size` is\n * the value returned by 
av_hash_get_size(), the string will be truncated.\n *\n * @param[in,out] ctx  Hash context\n * @param[out]    dst  Where the final hash value will be stored\n * @param[in]     size Maximum number of bytes to write to `dst`\n */\nvoid av_hash_final_b64(struct AVHashContext *ctx, uint8_t *dst, int size);\n\n/**\n * Free hash context and set hash context pointer to `NULL`.\n *\n * @param[in,out] ctx  Pointer to hash context\n */\nvoid av_hash_freep(struct AVHashContext **ctx);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_HASH_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hmac.h",
    "content": "/*\n * Copyright (C) 2012 Martin Storsjo\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HMAC_H\n#define AVUTIL_HMAC_H\n\n#include <stdint.h>\n\n#include \"version.h\"\n/**\n * @defgroup lavu_hmac HMAC\n * @ingroup lavu_crypto\n * @{\n */\n\nenum AVHMACType {\n    AV_HMAC_MD5,\n    AV_HMAC_SHA1,\n    AV_HMAC_SHA224,\n    AV_HMAC_SHA256,\n    AV_HMAC_SHA384,\n    AV_HMAC_SHA512,\n};\n\ntypedef struct AVHMAC AVHMAC;\n\n/**\n * Allocate an AVHMAC context.\n * @param type The hash function used for the HMAC.\n */\nAVHMAC *av_hmac_alloc(enum AVHMACType type);\n\n/**\n * Free an AVHMAC context.\n * @param ctx The context to free, may be NULL\n */\nvoid av_hmac_free(AVHMAC *ctx);\n\n/**\n * Initialize an AVHMAC context with an authentication key.\n * @param ctx    The HMAC context\n * @param key    The authentication key\n * @param keylen The length of the key, in bytes\n */\nvoid av_hmac_init(AVHMAC *ctx, const uint8_t *key, unsigned int keylen);\n\n/**\n * Hash data with the HMAC.\n * @param ctx  The HMAC context\n * @param data The data to hash\n * @param len  The length of the data, in bytes\n */\nvoid av_hmac_update(AVHMAC *ctx, const uint8_t *data, unsigned int len);\n\n/**\n * Finish hashing and output the HMAC 
digest.\n * @param ctx    The HMAC context\n * @param out    The output buffer to write the digest into\n * @param outlen The length of the out buffer, in bytes\n * @return       The number of bytes written to out, or a negative error code.\n */\nint av_hmac_final(AVHMAC *ctx, uint8_t *out, unsigned int outlen);\n\n/**\n * Hash an array of data with a key.\n * @param ctx    The HMAC context\n * @param data   The data to hash\n * @param len    The length of the data, in bytes\n * @param key    The authentication key\n * @param keylen The length of the key, in bytes\n * @param out    The output buffer to write the digest into\n * @param outlen The length of the out buffer, in bytes\n * @return       The number of bytes written to out, or a negative error code.\n */\nint av_hmac_calc(AVHMAC *ctx, const uint8_t *data, unsigned int len,\n                 const uint8_t *key, unsigned int keylen,\n                 uint8_t *out, unsigned int outlen);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_HMAC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_H\n#define AVUTIL_HWCONTEXT_H\n\n#include \"buffer.h\"\n#include \"frame.h\"\n#include \"log.h\"\n#include \"pixfmt.h\"\n\nenum AVHWDeviceType {\n    AV_HWDEVICE_TYPE_NONE,\n    AV_HWDEVICE_TYPE_VDPAU,\n    AV_HWDEVICE_TYPE_CUDA,\n    AV_HWDEVICE_TYPE_VAAPI,\n    AV_HWDEVICE_TYPE_DXVA2,\n    AV_HWDEVICE_TYPE_QSV,\n    AV_HWDEVICE_TYPE_VIDEOTOOLBOX,\n    AV_HWDEVICE_TYPE_D3D11VA,\n    AV_HWDEVICE_TYPE_DRM,\n    AV_HWDEVICE_TYPE_OPENCL,\n    AV_HWDEVICE_TYPE_MEDIACODEC,\n};\n\ntypedef struct AVHWDeviceInternal AVHWDeviceInternal;\n\n/**\n * This struct aggregates all the (hardware/vendor-specific) \"high-level\" state,\n * i.e. state that is not tied to a concrete processing configuration.\n * E.g., in an API that supports hardware-accelerated encoding and decoding,\n * this struct will (if possible) wrap the state that is common to both encoding\n * and decoding and from which specific instances of encoders or decoders can be\n * derived.\n *\n * This struct is reference-counted with the AVBuffer mechanism. The\n * av_hwdevice_ctx_alloc() constructor yields a reference, whose data field\n * points to the actual AVHWDeviceContext. 
Further objects derived from\n * AVHWDeviceContext (such as AVHWFramesContext, describing a frame pool with\n * specific properties) will hold an internal reference to it. After all the\n * references are released, the AVHWDeviceContext itself will be freed,\n * optionally invoking a user-specified callback for uninitializing the hardware\n * state.\n */\ntypedef struct AVHWDeviceContext {\n    /**\n     * A class for logging. Set by av_hwdevice_ctx_alloc().\n     */\n    const AVClass *av_class;\n\n    /**\n     * Private data used internally by libavutil. Must not be accessed in any\n     * way by the caller.\n     */\n    AVHWDeviceInternal *internal;\n\n    /**\n     * This field identifies the underlying API used for hardware access.\n     *\n     * This field is set when this struct is allocated and never changed\n     * afterwards.\n     */\n    enum AVHWDeviceType type;\n\n    /**\n     * The format-specific data, allocated and freed by libavutil along with\n     * this context.\n     *\n     * Should be cast by the user to the format-specific context defined in the\n     * corresponding header (hwcontext_*.h) and filled as described in the\n     * documentation before calling av_hwdevice_ctx_init().\n     *\n     * After calling av_hwdevice_ctx_init() this struct should not be modified\n     * by the caller.\n     */\n    void *hwctx;\n\n    /**\n     * This field may be set by the caller before calling av_hwdevice_ctx_init().\n     *\n     * If non-NULL, this callback will be called when the last reference to\n     * this context is unreferenced, immediately before it is freed.\n     *\n     * @note when other objects (e.g an AVHWFramesContext) are derived from this\n     *       struct, this callback will be invoked after all such child objects\n     *       are fully uninitialized and their respective destructors invoked.\n     */\n    void (*free)(struct AVHWDeviceContext *ctx);\n\n    /**\n     * Arbitrary user data, to be used e.g. 
by the free() callback.\n     */\n    void *user_opaque;\n} AVHWDeviceContext;\n\ntypedef struct AVHWFramesInternal AVHWFramesInternal;\n\n/**\n * This struct describes a set or pool of \"hardware\" frames (i.e. those with\n * data not located in normal system memory). All the frames in the pool are\n * assumed to be allocated in the same way and interchangeable.\n *\n * This struct is reference-counted with the AVBuffer mechanism and tied to a\n * given AVHWDeviceContext instance. The av_hwframe_ctx_alloc() constructor\n * yields a reference, whose data field points to the actual AVHWFramesContext\n * struct.\n */\ntypedef struct AVHWFramesContext {\n    /**\n     * A class for logging.\n     */\n    const AVClass *av_class;\n\n    /**\n     * Private data used internally by libavutil. Must not be accessed in any\n     * way by the caller.\n     */\n    AVHWFramesInternal *internal;\n\n    /**\n     * A reference to the parent AVHWDeviceContext. This reference is owned and\n     * managed by the enclosing AVHWFramesContext, but the caller may derive\n     * additional references from it.\n     */\n    AVBufferRef *device_ref;\n\n    /**\n     * The parent AVHWDeviceContext. 
This is simply a pointer to\n     * device_ref->data provided for convenience.\n     *\n     * Set by libavutil in av_hwframe_ctx_init().\n     */\n    AVHWDeviceContext *device_ctx;\n\n    /**\n     * The format-specific data, allocated and freed automatically along with\n     * this context.\n     *\n     * Should be cast by the user to the format-specific context defined in the\n     * corresponding header (hwframe_*.h) and filled as described in the\n     * documentation before calling av_hwframe_ctx_init().\n     *\n     * After any frames using this context are created, the contents of this\n     * struct should not be modified by the caller.\n     */\n    void *hwctx;\n\n    /**\n     * This field may be set by the caller before calling av_hwframe_ctx_init().\n     *\n     * If non-NULL, this callback will be called when the last reference to\n     * this context is unreferenced, immediately before it is freed.\n     */\n    void (*free)(struct AVHWFramesContext *ctx);\n\n    /**\n     * Arbitrary user data, to be used e.g. by the free() callback.\n     */\n    void *user_opaque;\n\n    /**\n     * A pool from which the frames are allocated by av_hwframe_get_buffer().\n     * This field may be set by the caller before calling av_hwframe_ctx_init().\n     * The buffers returned by calling av_buffer_pool_get() on this pool must\n     * have the properties described in the documentation in the corresponding hw\n     * type's header (hwcontext_*.h). The pool will be freed strictly before\n     * this struct's free() callback is invoked.\n     *\n     * This field may be NULL, then libavutil will attempt to allocate a pool\n     * internally. Note that certain device types enforce pools allocated at\n     * fixed size (frame count), which cannot be extended dynamically. In such a\n     * case, initial_pool_size must be set appropriately.\n     */\n    AVBufferPool *pool;\n\n    /**\n     * Initial size of the frame pool. 
If a device type does not support\n     * dynamically resizing the pool, then this is also the maximum pool size.\n     *\n     * May be set by the caller before calling av_hwframe_ctx_init(). Must be\n     * set if pool is NULL and the device type does not support dynamic pools.\n     */\n    int initial_pool_size;\n\n    /**\n     * The pixel format identifying the underlying HW surface type.\n     *\n     * Must be a hwaccel format, i.e. the corresponding descriptor must have the\n     * AV_PIX_FMT_FLAG_HWACCEL flag set.\n     *\n     * Must be set by the user before calling av_hwframe_ctx_init().\n     */\n    enum AVPixelFormat format;\n\n    /**\n     * The pixel format identifying the actual data layout of the hardware\n     * frames.\n     *\n     * Must be set by the caller before calling av_hwframe_ctx_init().\n     *\n     * @note when the underlying API does not provide the exact data layout, but\n     * only the colorspace/bit depth, this field should be set to the fully\n     * planar version of that format (e.g. 
for 8-bit 420 YUV it should be\n     * AV_PIX_FMT_YUV420P, not AV_PIX_FMT_NV12 or anything else).\n     */\n    enum AVPixelFormat sw_format;\n\n    /**\n     * The allocated dimensions of the frames in this pool.\n     *\n     * Must be set by the user before calling av_hwframe_ctx_init().\n     */\n    int width, height;\n} AVHWFramesContext;\n\n/**\n * Look up an AVHWDeviceType by name.\n *\n * @param name String name of the device type (case-insensitive).\n * @return The type from enum AVHWDeviceType, or AV_HWDEVICE_TYPE_NONE if\n *         not found.\n */\nenum AVHWDeviceType av_hwdevice_find_type_by_name(const char *name);\n\n/** Get the string name of an AVHWDeviceType.\n *\n * @param type Type from enum AVHWDeviceType.\n * @return Pointer to a static string containing the name, or NULL if the type\n *         is not valid.\n */\nconst char *av_hwdevice_get_type_name(enum AVHWDeviceType type);\n\n/**\n * Iterate over supported device types.\n *\n * @param type AV_HWDEVICE_TYPE_NONE initially, then the previous type\n *             returned by this function in subsequent iterations.\n * @return The next usable device type from enum AVHWDeviceType, or\n *         AV_HWDEVICE_TYPE_NONE if there are no more.\n */\nenum AVHWDeviceType av_hwdevice_iterate_types(enum AVHWDeviceType prev);\n\n/**\n * Allocate an AVHWDeviceContext for a given hardware type.\n *\n * @param type the type of the hardware device to allocate.\n * @return a reference to the newly created AVHWDeviceContext on success or NULL\n *         on failure.\n */\nAVBufferRef *av_hwdevice_ctx_alloc(enum AVHWDeviceType type);\n\n/**\n * Finalize the device context before use. 
This function must be called after\n * the context is filled with all the required information and before it is\n * used in any way.\n *\n * @param ref a reference to the AVHWDeviceContext\n * @return 0 on success, a negative AVERROR code on failure\n */\nint av_hwdevice_ctx_init(AVBufferRef *ref);\n\n/**\n * Open a device of the specified type and create an AVHWDeviceContext for it.\n *\n * This is a convenience function intended to cover the simple cases. Callers\n * who need to fine-tune device creation/management should open the device\n * manually and then wrap it in an AVHWDeviceContext using\n * av_hwdevice_ctx_alloc()/av_hwdevice_ctx_init().\n *\n * The returned context is already initialized and ready for use, the caller\n * should not call av_hwdevice_ctx_init() on it. The user_opaque/free fields of\n * the created AVHWDeviceContext are set by this function and should not be\n * touched by the caller.\n *\n * @param device_ctx On success, a reference to the newly-created device context\n *                   will be written here. The reference is owned by the caller\n *                   and must be released with av_buffer_unref() when no longer\n *                   needed. On failure, NULL will be written to this pointer.\n * @param type The type of the device to create.\n * @param device A type-specific string identifying the device to open.\n * @param opts A dictionary of additional (type-specific) options to use in\n *             opening the device. 
The dictionary remains owned by the caller.\n * @param flags currently unused\n *\n * @return 0 on success, a negative AVERROR code on failure.\n */\nint av_hwdevice_ctx_create(AVBufferRef **device_ctx, enum AVHWDeviceType type,\n                           const char *device, AVDictionary *opts, int flags);\n\n/**\n * Create a new device of the specified type from an existing device.\n *\n * If the source device is a device of the target type or was originally\n * derived from such a device (possibly through one or more intermediate\n * devices of other types), then this will return a reference to the\n * existing device of the same type as is requested.\n *\n * Otherwise, it will attempt to derive a new device from the given source\n * device.  If direct derivation to the new type is not implemented, it will\n * attempt the same derivation from each ancestor of the source device in\n * turn looking for an implemented derivation method.\n *\n * @param dst_ctx On success, a reference to the newly-created\n *                AVHWDeviceContext.\n * @param type    The type of the new device to create.\n * @param src_ctx A reference to an existing AVHWDeviceContext which will be\n *                used to create the new device.\n * @param flags   Currently unused; should be set to zero.\n * @return        Zero on success, a negative AVERROR code on failure.\n */\nint av_hwdevice_ctx_create_derived(AVBufferRef **dst_ctx,\n                                   enum AVHWDeviceType type,\n                                   AVBufferRef *src_ctx, int flags);\n\n\n/**\n * Allocate an AVHWFramesContext tied to a given device context.\n *\n * @param device_ctx a reference to a AVHWDeviceContext. 
This function will make\n *                   a new reference for internal use, the one passed to the\n *                   function remains owned by the caller.\n * @return a reference to the newly created AVHWFramesContext on success or NULL\n *         on failure.\n */\nAVBufferRef *av_hwframe_ctx_alloc(AVBufferRef *device_ctx);\n\n/**\n * Finalize the context before use. This function must be called after the\n * context is filled with all the required information and before it is attached\n * to any frames.\n *\n * @param ref a reference to the AVHWFramesContext\n * @return 0 on success, a negative AVERROR code on failure\n */\nint av_hwframe_ctx_init(AVBufferRef *ref);\n\n/**\n * Allocate a new frame attached to the given AVHWFramesContext.\n *\n * @param hwframe_ctx a reference to an AVHWFramesContext\n * @param frame an empty (freshly allocated or unreffed) frame to be filled with\n *              newly allocated buffers.\n * @param flags currently unused, should be set to zero\n * @return 0 on success, a negative AVERROR code on failure\n */\nint av_hwframe_get_buffer(AVBufferRef *hwframe_ctx, AVFrame *frame, int flags);\n\n/**\n * Copy data to or from a hw surface. At least one of dst/src must have an\n * AVHWFramesContext attached.\n *\n * If src has an AVHWFramesContext attached, then the format of dst (if set)\n * must use one of the formats returned by av_hwframe_transfer_get_formats(src,\n * AV_HWFRAME_TRANSFER_DIRECTION_FROM).\n * If dst has an AVHWFramesContext attached, then the format of src must use one\n * of the formats returned by av_hwframe_transfer_get_formats(dst,\n * AV_HWFRAME_TRANSFER_DIRECTION_TO)\n *\n * dst may be \"clean\" (i.e. 
with data/buf pointers unset), in which case the\n * data buffers will be allocated by this function using av_frame_get_buffer().\n * If dst->format is set, then this format will be used, otherwise (when\n * dst->format is AV_PIX_FMT_NONE) the first acceptable format will be chosen.\n *\n * The two frames must have matching allocated dimensions (i.e. equal to\n * AVHWFramesContext.width/height), since not all device types support\n * transferring a sub-rectangle of the whole surface. The display dimensions\n * (i.e. AVFrame.width/height) may be smaller than the allocated dimensions, but\n * also have to be equal for both frames. When the display dimensions are\n * smaller than the allocated dimensions, the content of the padding in the\n * destination frame is unspecified.\n *\n * @param dst the destination frame. dst is not touched on failure.\n * @param src the source frame.\n * @param flags currently unused, should be set to zero\n * @return 0 on success, a negative AVERROR error code on failure.\n */\nint av_hwframe_transfer_data(AVFrame *dst, const AVFrame *src, int flags);\n\nenum AVHWFrameTransferDirection {\n    /**\n     * Transfer the data from the queried hw frame.\n     */\n    AV_HWFRAME_TRANSFER_DIRECTION_FROM,\n\n    /**\n     * Transfer the data to the queried hw frame.\n     */\n    AV_HWFRAME_TRANSFER_DIRECTION_TO,\n};\n\n/**\n * Get a list of possible source or target formats usable in\n * av_hwframe_transfer_data().\n *\n * @param hwframe_ctx the frame context to obtain the information for\n * @param dir the direction of the transfer\n * @param formats the pointer to the output format list will be written here.\n *                The list is terminated with AV_PIX_FMT_NONE and must be freed\n *                by the caller when no longer needed using av_free().\n *                If this function returns successfully, the format list will\n *                have at least one item (not counting the terminator).\n *                On failure, the 
contents of this pointer are unspecified.\n * @param flags currently unused, should be set to zero\n * @return 0 on success, a negative AVERROR code on failure.\n */\nint av_hwframe_transfer_get_formats(AVBufferRef *hwframe_ctx,\n                                    enum AVHWFrameTransferDirection dir,\n                                    enum AVPixelFormat **formats, int flags);\n\n\n/**\n * This struct describes the constraints on hardware frames attached to\n * a given device with a hardware-specific configuration.  This is returned\n * by av_hwdevice_get_hwframe_constraints() and must be freed by\n * av_hwframe_constraints_free() after use.\n */\ntypedef struct AVHWFramesConstraints {\n    /**\n     * A list of possible values for format in the hw_frames_ctx,\n     * terminated by AV_PIX_FMT_NONE.  This member will always be filled.\n     */\n    enum AVPixelFormat *valid_hw_formats;\n\n    /**\n     * A list of possible values for sw_format in the hw_frames_ctx,\n     * terminated by AV_PIX_FMT_NONE.  
Can be NULL if this information is\n     * not known.\n     */\n    enum AVPixelFormat *valid_sw_formats;\n\n    /**\n     * The minimum size of frames in this hw_frames_ctx.\n     * (Zero if not known.)\n     */\n    int min_width;\n    int min_height;\n\n    /**\n     * The maximum size of frames in this hw_frames_ctx.\n     * (INT_MAX if not known / no limit.)\n     */\n    int max_width;\n    int max_height;\n} AVHWFramesConstraints;\n\n/**\n * Allocate a HW-specific configuration structure for a given HW device.\n * After use, the user must free all members as required by the specific\n * hardware structure being used, then free the structure itself with\n * av_free().\n *\n * @param device_ctx a reference to the associated AVHWDeviceContext.\n * @return The newly created HW-specific configuration structure on\n *         success or NULL on failure.\n */\nvoid *av_hwdevice_hwconfig_alloc(AVBufferRef *device_ctx);\n\n/**\n * Get the constraints on HW frames given a device and the HW-specific\n * configuration to be used with that device.  
If no HW-specific\n * configuration is provided, returns the maximum possible capabilities\n * of the device.\n *\n * @param ref a reference to the associated AVHWDeviceContext.\n * @param hwconfig a filled HW-specific configuration structure, or NULL\n *        to return the maximum possible capabilities of the device.\n * @return AVHWFramesConstraints structure describing the constraints\n *         on the device, or NULL if not available.\n */\nAVHWFramesConstraints *av_hwdevice_get_hwframe_constraints(AVBufferRef *ref,\n                                                           const void *hwconfig);\n\n/**\n * Free an AVHWFrameConstraints structure.\n *\n * @param constraints The (filled or unfilled) AVHWFrameConstraints structure.\n */\nvoid av_hwframe_constraints_free(AVHWFramesConstraints **constraints);\n\n\n/**\n * Flags to apply to frame mappings.\n */\nenum {\n    /**\n     * The mapping must be readable.\n     */\n    AV_HWFRAME_MAP_READ      = 1 << 0,\n    /**\n     * The mapping must be writeable.\n     */\n    AV_HWFRAME_MAP_WRITE     = 1 << 1,\n    /**\n     * The mapped frame will be overwritten completely in subsequent\n     * operations, so the current frame data need not be loaded.  Any values\n     * which are not overwritten are unspecified.\n     */\n    AV_HWFRAME_MAP_OVERWRITE = 1 << 2,\n    /**\n     * The mapping must be direct.  That is, there must not be any copying in\n     * the map or unmap steps.  Note that performance of direct mappings may\n     * be much lower than normal memory.\n     */\n    AV_HWFRAME_MAP_DIRECT    = 1 << 3,\n};\n\n/**\n * Map a hardware frame.\n *\n * This has a number of different possible effects, depending on the format\n * and origin of the src and dst frames.  On input, src should be a usable\n * frame with valid buffers and dst should be blank (typically as just created\n * by av_frame_alloc()).  
src should have an associated hwframe context, and\n * dst may optionally have a format and associated hwframe context.\n *\n * If src was created by mapping a frame from the hwframe context of dst,\n * then this function undoes the mapping - dst is replaced by a reference to\n * the frame that src was originally mapped from.\n *\n * If both src and dst have an associated hwframe context, then this function\n * attempts to map the src frame from its hardware context to that of dst and\n * then fill dst with appropriate data to be usable there.  This will only be\n * possible if the hwframe contexts and associated devices are compatible -\n * given compatible devices, av_hwframe_ctx_create_derived() can be used to\n * create a hwframe context for dst in which mapping should be possible.\n *\n * If src has a hwframe context but dst does not, then the src frame is\n * mapped to normal memory and should thereafter be usable as a normal frame.\n * If the format is set on dst, then the mapping will attempt to create dst\n * with that format and fail if it is not possible.  
If format is unset (is\n * AV_PIX_FMT_NONE) then dst will be mapped with whatever the most appropriate\n * format to use is (probably the sw_format of the src hwframe context).\n *\n * A return value of AVERROR(ENOSYS) indicates that the mapping is not\n * possible with the given arguments and hwframe setup, while other return\n * values indicate that it failed somehow.\n *\n * @param dst Destination frame, to contain the mapping.\n * @param src Source frame, to be mapped.\n * @param flags Some combination of AV_HWFRAME_MAP_* flags.\n * @return Zero on success, negative AVERROR code on failure.\n */\nint av_hwframe_map(AVFrame *dst, const AVFrame *src, int flags);\n\n\n/**\n * Create and initialise an AVHWFramesContext as a mapping of another existing\n * AVHWFramesContext on a different device.\n *\n * av_hwframe_ctx_init() should not be called after this.\n *\n * @param derived_frame_ctx  On success, a reference to the newly created\n *                           AVHWFramesContext.\n * @param format             The AVPixelFormat for the derived context.\n * @param derived_device_ctx A reference to the device to create the new\n *                           AVHWFramesContext on.\n * @param source_frame_ctx   A reference to an existing AVHWFramesContext\n *                           which will be mapped to the derived context.\n * @param flags  Some combination of AV_HWFRAME_MAP_* flags, defining the\n *               mapping parameters to apply to frames which are allocated\n *               in the derived device.\n * @return       Zero on success, negative AVERROR code on failure.\n */\nint av_hwframe_ctx_create_derived(AVBufferRef **derived_frame_ctx,\n                                  enum AVPixelFormat format,\n                                  AVBufferRef *derived_device_ctx,\n                                  AVBufferRef *source_frame_ctx,\n                                  int flags);\n\n#endif /* AVUTIL_HWCONTEXT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_cuda.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n\n#ifndef AVUTIL_HWCONTEXT_CUDA_H\n#define AVUTIL_HWCONTEXT_CUDA_H\n\n#ifndef CUDA_VERSION\n#include <cuda.h>\n#endif\n\n#include \"pixfmt.h\"\n\n/**\n * @file\n * An API-specific header for AV_HWDEVICE_TYPE_CUDA.\n *\n * This API supports dynamic frame pools. AVHWFramesContext.pool must return\n * AVBufferRefs whose data pointer is a CUdeviceptr.\n */\n\ntypedef struct AVCUDADeviceContextInternal AVCUDADeviceContextInternal;\n\n/**\n * This struct is allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVCUDADeviceContext {\n    CUcontext cuda_ctx;\n    AVCUDADeviceContextInternal *internal;\n} AVCUDADeviceContext;\n\n/**\n * AVHWFramesContext.hwctx is currently not used\n */\n\n#endif /* AVUTIL_HWCONTEXT_CUDA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_d3d11va.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_D3D11VA_H\n#define AVUTIL_HWCONTEXT_D3D11VA_H\n\n/**\n * @file\n * An API-specific header for AV_HWDEVICE_TYPE_D3D11VA.\n *\n * The default pool implementation will be fixed-size if initial_pool_size is\n * set (and allocate elements from an array texture). Otherwise it will allocate\n * individual textures. Be aware that decoding requires a single array texture.\n *\n * Using sw_format==AV_PIX_FMT_YUV420P has special semantics, and maps to\n * DXGI_FORMAT_420_OPAQUE. av_hwframe_transfer_data() is not supported for\n * this format. Refer to MSDN for details.\n *\n * av_hwdevice_ctx_create() for this device type supports a key named \"debug\"\n * for the AVDictionary entry. If this is set to any value, the device creation\n * code will try to load various supported D3D debugging layers.\n */\n\n#include <d3d11.h>\n#include <stdint.h>\n\n/**\n * This struct is allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVD3D11VADeviceContext {\n    /**\n     * Device used for texture creation and access. This can also be used to\n     * set the libavcodec decoding device.\n     *\n     * Must be set by the user. 
This is the only mandatory field - the other\n     * device context fields are set from this and are available for convenience.\n     *\n     * Deallocating the AVHWDeviceContext will always release this interface,\n     * and it does not matter whether it was user-allocated.\n     */\n    ID3D11Device        *device;\n\n    /**\n     * If unset, this will be set from the device field on init.\n     *\n     * Deallocating the AVHWDeviceContext will always release this interface,\n     * and it does not matter whether it was user-allocated.\n     */\n    ID3D11DeviceContext *device_context;\n\n    /**\n     * If unset, this will be set from the device field on init.\n     *\n     * Deallocating the AVHWDeviceContext will always release this interface,\n     * and it does not matter whether it was user-allocated.\n     */\n    ID3D11VideoDevice   *video_device;\n\n    /**\n     * If unset, this will be set from the device_context field on init.\n     *\n     * Deallocating the AVHWDeviceContext will always release this interface,\n     * and it does not matter whether it was user-allocated.\n     */\n    ID3D11VideoContext  *video_context;\n\n    /**\n     * Callbacks for locking. They protect accesses to device_context and\n     * video_context calls. They also protect access to the internal staging\n     * texture (for av_hwframe_transfer_data() calls). They do NOT protect\n     * access to hwcontext or decoder state in general.\n     *\n     * If unset on init, the hwcontext implementation will set them to use an\n     * internal mutex.\n     *\n     * The underlying lock must be recursive. 
lock_ctx is for free use by the\n     * locking implementation.\n     */\n    void (*lock)(void *lock_ctx);\n    void (*unlock)(void *lock_ctx);\n    void *lock_ctx;\n} AVD3D11VADeviceContext;\n\n/**\n * D3D11 frame descriptor for pool allocation.\n *\n * In user-allocated pools, AVHWFramesContext.pool must return AVBufferRefs\n * with the data pointer pointing at an object of this type describing the\n * planes of the frame.\n *\n * This has no use outside of custom allocation, and AVFrame AVBufferRef do not\n * necessarily point to an instance of this struct.\n */\ntypedef struct AVD3D11FrameDescriptor {\n    /**\n     * The texture in which the frame is located. The reference count is\n     * managed by the AVBufferRef, and destroying the reference will release\n     * the interface.\n     *\n     * Normally stored in AVFrame.data[0].\n     */\n    ID3D11Texture2D *texture;\n\n    /**\n     * The index into the array texture element representing the frame, or 0\n     * if the texture is not an array texture.\n     *\n     * Normally stored in AVFrame.data[1] (cast from intptr_t).\n     */\n    intptr_t index;\n} AVD3D11FrameDescriptor;\n\n/**\n * This struct is allocated as AVHWFramesContext.hwctx\n */\ntypedef struct AVD3D11VAFramesContext {\n    /**\n     * The canonical texture used for pool allocation. 
If this is set to NULL\n     * on init, the hwframes implementation will allocate and set an array\n     * texture if initial_pool_size > 0.\n     *\n     * The only situation when the API user should set this is:\n     * - the user wants to do manual pool allocation (setting\n     *   AVHWFramesContext.pool), instead of letting AVHWFramesContext\n     *   allocate the pool\n     * - of an array texture\n     * - and wants to use it for decoding\n     * - this has to be done before calling av_hwframe_ctx_init()\n     *\n     * Deallocating the AVHWFramesContext will always release this interface,\n     * and it does not matter whether it was user-allocated.\n     *\n     * This is in particular used by the libavcodec D3D11VA hwaccel, which\n     * requires a single array texture. It will create ID3D11VideoDecoderOutputView\n     * objects for each array texture element on decoder initialization.\n     */\n    ID3D11Texture2D *texture;\n\n    /**\n     * D3D11_TEXTURE2D_DESC.BindFlags used for texture creation. The user must\n     * at least set D3D11_BIND_DECODER if the frames context is to be used for\n     * video decoding.\n     * This field is ignored/invalid if a user-allocated texture is provided.\n     */\n    UINT BindFlags;\n\n    /**\n     * D3D11_TEXTURE2D_DESC.MiscFlags used for texture creation.\n     * This field is ignored/invalid if a user-allocated texture is provided.\n     */\n    UINT MiscFlags;\n} AVD3D11VAFramesContext;\n\n#endif /* AVUTIL_HWCONTEXT_D3D11VA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_drm.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_DRM_H\n#define AVUTIL_HWCONTEXT_DRM_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n/**\n * @file\n * API-specific header for AV_HWDEVICE_TYPE_DRM.\n *\n * Internal frame allocation is not currently supported - all frames\n * must be allocated by the user.  
Thus AVHWFramesContext is always\n * NULL, though this may change if support for frame allocation is\n * added in future.\n */\n\nenum {\n    /**\n     * The maximum number of layers/planes in a DRM frame.\n     */\n    AV_DRM_MAX_PLANES = 4\n};\n\n/**\n * DRM object descriptor.\n *\n * Describes a single DRM object, addressing it as a PRIME file\n * descriptor.\n */\ntypedef struct AVDRMObjectDescriptor {\n    /**\n     * DRM PRIME fd for the object.\n     */\n    int fd;\n    /**\n     * Total size of the object.\n     *\n     * (This includes any parts which do not contain image data.)\n     */\n    size_t size;\n    /**\n     * Format modifier applied to the object (DRM_FORMAT_MOD_*).\n     *\n     * If the format modifier is unknown then this should be set to\n     * DRM_FORMAT_MOD_INVALID.\n     */\n    uint64_t format_modifier;\n} AVDRMObjectDescriptor;\n\n/**\n * DRM plane descriptor.\n *\n * Describes a single plane of a layer, which is contained within\n * a single object.\n */\ntypedef struct AVDRMPlaneDescriptor {\n    /**\n     * Index of the object containing this plane in the objects\n     * array of the enclosing frame descriptor.\n     */\n    int object_index;\n    /**\n     * Offset within that object of this plane.\n     */\n    ptrdiff_t offset;\n    /**\n     * Pitch (linesize) of this plane.\n     */\n    ptrdiff_t pitch;\n} AVDRMPlaneDescriptor;\n\n/**\n * DRM layer descriptor.\n *\n * Describes a single layer within a frame.  
This has the structure\n * defined by its format, and will contain one or more planes.\n */\ntypedef struct AVDRMLayerDescriptor {\n    /**\n     * Format of the layer (DRM_FORMAT_*).\n     */\n    uint32_t format;\n    /**\n     * Number of planes in the layer.\n     *\n     * This must match the number of planes required by format.\n     */\n    int nb_planes;\n    /**\n     * Array of planes in this layer.\n     */\n    AVDRMPlaneDescriptor planes[AV_DRM_MAX_PLANES];\n} AVDRMLayerDescriptor;\n\n/**\n * DRM frame descriptor.\n *\n * This is used as the data pointer for AV_PIX_FMT_DRM_PRIME frames.\n * It is also used by user-allocated frame pools - allocating in\n * AVHWFramesContext.pool must return AVBufferRefs which contain\n * an object of this type.\n *\n * The fields of this structure should be set such that it can be\n * imported directly by EGL using the EGL_EXT_image_dma_buf_import\n * and EGL_EXT_image_dma_buf_import_modifiers extensions.\n * (Note that the exact layout of a particular format may vary between\n * platforms - we only specify that the same platform should be able\n * to import it.)\n *\n * The total number of planes must not exceed AV_DRM_MAX_PLANES, and\n * the order of the planes by increasing layer index followed by\n * increasing plane index must be the same as the order which would\n * be used for the data pointers in the equivalent software format.\n */\ntypedef struct AVDRMFrameDescriptor {\n    /**\n     * Number of DRM objects making up this frame.\n     */\n    int nb_objects;\n    /**\n     * Array of objects making up the frame.\n     */\n    AVDRMObjectDescriptor objects[AV_DRM_MAX_PLANES];\n    /**\n     * Number of layers in the frame.\n     */\n    int nb_layers;\n    /**\n     * Array of layers in the frame.\n     */\n    AVDRMLayerDescriptor layers[AV_DRM_MAX_PLANES];\n} AVDRMFrameDescriptor;\n\n/**\n * DRM device.\n *\n * Allocated as AVHWDeviceContext.hwctx.\n */\ntypedef struct AVDRMDeviceContext {\n    /**\n     * File 
descriptor of DRM device.\n     *\n     * This is used as the device to create frames on, and may also be\n     * used in some derivation and mapping operations.\n     *\n     * If no device is required, set to -1.\n     */\n    int fd;\n} AVDRMDeviceContext;\n\n#endif /* AVUTIL_HWCONTEXT_DRM_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_dxva2.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n\n#ifndef AVUTIL_HWCONTEXT_DXVA2_H\n#define AVUTIL_HWCONTEXT_DXVA2_H\n\n/**\n * @file\n * An API-specific header for AV_HWDEVICE_TYPE_DXVA2.\n *\n * Only fixed-size pools are supported.\n *\n * For user-allocated pools, AVHWFramesContext.pool must return AVBufferRefs\n * with the data pointer set to a pointer to IDirect3DSurface9.\n */\n\n#include <d3d9.h>\n#include <dxva2api.h>\n\n/**\n * This struct is allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVDXVA2DeviceContext {\n    IDirect3DDeviceManager9 *devmgr;\n} AVDXVA2DeviceContext;\n\n/**\n * This struct is allocated as AVHWFramesContext.hwctx\n */\ntypedef struct AVDXVA2FramesContext {\n    /**\n     * The surface type (e.g. DXVA2_VideoProcessorRenderTarget or\n     * DXVA2_VideoDecoderRenderTarget). Must be set by the caller.\n     */\n    DWORD               surface_type;\n\n    /**\n     * The surface pool. 
When an external pool is not provided by the caller,\n     * this will be managed (allocated and filled on init, freed on uninit) by\n     * libavutil.\n     */\n    IDirect3DSurface9 **surfaces;\n    int              nb_surfaces;\n\n    /**\n     * Certain drivers require the decoder to be destroyed before the surfaces.\n     * To allow internally managed pools to work properly in such cases, this\n     * field is provided.\n     *\n     * If it is non-NULL, libavutil will call IDirectXVideoDecoder_Release() on\n     * it just before the internal surface pool is freed.\n     *\n     * This is for convenience only. Some code uses other methods to manage the\n     * decoder reference.\n     */\n    IDirectXVideoDecoder *decoder_to_release;\n} AVDXVA2FramesContext;\n\n#endif /* AVUTIL_HWCONTEXT_DXVA2_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_mediacodec.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_MEDIACODEC_H\n#define AVUTIL_HWCONTEXT_MEDIACODEC_H\n\n/**\n * MediaCodec details.\n *\n * Allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVMediaCodecDeviceContext {\n    /**\n     * android/view/Surface handle, to be filled by the user.\n     *\n     * This is the default surface used by decoders on this device.\n     */\n    void *surface;\n} AVMediaCodecDeviceContext;\n\n#endif /* AVUTIL_HWCONTEXT_MEDIACODEC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_qsv.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_QSV_H\n#define AVUTIL_HWCONTEXT_QSV_H\n\n#include <mfx/mfxvideo.h>\n\n/**\n * @file\n * An API-specific header for AV_HWDEVICE_TYPE_QSV.\n *\n * This API does not support dynamic frame pools. AVHWFramesContext.pool must\n * contain AVBufferRefs whose data pointer points to an mfxFrameSurface1 struct.\n */\n\n/**\n * This struct is allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVQSVDeviceContext {\n    mfxSession session;\n} AVQSVDeviceContext;\n\n/**\n * This struct is allocated as AVHWFramesContext.hwctx\n */\ntypedef struct AVQSVFramesContext {\n    mfxFrameSurface1 *surfaces;\n    int            nb_surfaces;\n\n    /**\n     * A combination of MFX_MEMTYPE_* describing the frame pool.\n     */\n    int frame_type;\n} AVQSVFramesContext;\n\n#endif /* AVUTIL_HWCONTEXT_QSV_H */\n\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_vaapi.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_VAAPI_H\n#define AVUTIL_HWCONTEXT_VAAPI_H\n\n#include <va/va.h>\n\n/**\n * @file\n * API-specific header for AV_HWDEVICE_TYPE_VAAPI.\n *\n * Dynamic frame pools are supported, but note that any pool used as a render\n * target is required to be of fixed size in order to be be usable as an\n * argument to vaCreateContext().\n *\n * For user-allocated pools, AVHWFramesContext.pool must return AVBufferRefs\n * with the data pointer set to a VASurfaceID.\n */\n\nenum {\n    /**\n     * The quirks field has been set by the user and should not be detected\n     * automatically by av_hwdevice_ctx_init().\n     */\n    AV_VAAPI_DRIVER_QUIRK_USER_SET = (1 << 0),\n    /**\n     * The driver does not destroy parameter buffers when they are used by\n     * vaRenderPicture().  
Additional code will be required to destroy them\n     * separately afterwards.\n     */\n    AV_VAAPI_DRIVER_QUIRK_RENDER_PARAM_BUFFERS = (1 << 1),\n\n    /**\n     * The driver does not support the VASurfaceAttribMemoryType attribute,\n     * so the surface allocation code will not try to use it.\n     */\n    AV_VAAPI_DRIVER_QUIRK_ATTRIB_MEMTYPE = (1 << 2),\n\n    /**\n     * The driver does not support surface attributes at all.\n     * The surface allocation code will never pass them to surface allocation,\n     * and the results of the vaQuerySurfaceAttributes() call will be faked.\n     */\n    AV_VAAPI_DRIVER_QUIRK_SURFACE_ATTRIBUTES = (1 << 3),\n};\n\n/**\n * VAAPI connection details.\n *\n * Allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVVAAPIDeviceContext {\n    /**\n     * The VADisplay handle, to be filled by the user.\n     */\n    VADisplay display;\n    /**\n     * Driver quirks to apply - this is filled by av_hwdevice_ctx_init(),\n     * with reference to a table of known drivers, unless the\n     * AV_VAAPI_DRIVER_QUIRK_USER_SET bit is already present.  The user\n     * may need to refer to this field when performing any later\n     * operations using VAAPI with the same VADisplay.\n     */\n    unsigned int driver_quirks;\n} AVVAAPIDeviceContext;\n\n/**\n * VAAPI-specific data associated with a frame pool.\n *\n * Allocated as AVHWFramesContext.hwctx.\n */\ntypedef struct AVVAAPIFramesContext {\n    /**\n     * Set by the user to apply surface attributes to all surfaces in\n     * the frame pool.  
If null, default settings are used.\n     */\n    VASurfaceAttrib *attributes;\n    int           nb_attributes;\n    /**\n     * The surface IDs of all surfaces in the pool after creation.\n     * Only valid if AVHWFramesContext.initial_pool_size was positive.\n     * These are intended to be used as the render_targets arguments to\n     * vaCreateContext().\n     */\n    VASurfaceID     *surface_ids;\n    int           nb_surfaces;\n} AVVAAPIFramesContext;\n\n/**\n * VAAPI hardware pipeline configuration details.\n *\n * Allocated with av_hwdevice_hwconfig_alloc().\n */\ntypedef struct AVVAAPIHWConfig {\n    /**\n     * ID of a VAAPI pipeline configuration.\n     */\n    VAConfigID config_id;\n} AVVAAPIHWConfig;\n\n#endif /* AVUTIL_HWCONTEXT_VAAPI_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_vdpau.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_VDPAU_H\n#define AVUTIL_HWCONTEXT_VDPAU_H\n\n#include <vdpau/vdpau.h>\n\n/**\n * @file\n * An API-specific header for AV_HWDEVICE_TYPE_VDPAU.\n *\n * This API supports dynamic frame pools. AVHWFramesContext.pool must return\n * AVBufferRefs whose data pointer is a VdpVideoSurface.\n */\n\n/**\n * This struct is allocated as AVHWDeviceContext.hwctx\n */\ntypedef struct AVVDPAUDeviceContext {\n    VdpDevice          device;\n    VdpGetProcAddress *get_proc_address;\n} AVVDPAUDeviceContext;\n\n/**\n * AVHWFramesContext.hwctx is currently not used\n */\n\n#endif /* AVUTIL_HWCONTEXT_VDPAU_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/hwcontext_videotoolbox.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_HWCONTEXT_VIDEOTOOLBOX_H\n#define AVUTIL_HWCONTEXT_VIDEOTOOLBOX_H\n\n#include <stdint.h>\n\n#include <VideoToolbox/VideoToolbox.h>\n\n#include \"pixfmt.h\"\n\n/**\n * @file\n * An API-specific header for AV_HWDEVICE_TYPE_VIDEOTOOLBOX.\n *\n * This API currently does not support frame allocation, as the raw VideoToolbox\n * API does allocation, and FFmpeg itself never has the need to allocate frames.\n *\n * If the API user sets a custom pool, AVHWFramesContext.pool must return\n * AVBufferRefs whose data pointer is a CVImageBufferRef or CVPixelBufferRef.\n *\n * Currently AVHWDeviceContext.hwctx and AVHWFramesContext.hwctx are always\n * NULL.\n */\n\n/**\n * Convert a VideoToolbox (actually CoreVideo) format to AVPixelFormat.\n * Returns AV_PIX_FMT_NONE if no known equivalent was found.\n */\nenum AVPixelFormat av_map_videotoolbox_format_to_pixfmt(uint32_t cv_fmt);\n\n/**\n * Convert an AVPixelFormat to a VideoToolbox (actually CoreVideo) format.\n * Returns 0 if no known equivalent was found.\n */\nuint32_t av_map_videotoolbox_format_from_pixfmt(enum AVPixelFormat pix_fmt);\n\n#endif /* AVUTIL_HWCONTEXT_VIDEOTOOLBOX_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/imgutils.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_IMGUTILS_H\n#define AVUTIL_IMGUTILS_H\n\n/**\n * @file\n * misc image utilities\n *\n * @addtogroup lavu_picture\n * @{\n */\n\n#include \"avutil.h\"\n#include \"pixdesc.h\"\n#include \"rational.h\"\n\n/**\n * Compute the max pixel step for each plane of an image with a\n * format described by pixdesc.\n *\n * The pixel step is the distance in bytes between the first byte of\n * the group of bytes which describe a pixel component and the first\n * byte of the successive group in the same plane for the same\n * component.\n *\n * @param max_pixsteps an array which is filled with the max pixel step\n * for each plane. Since a plane may contain different pixel\n * components, the computed max_pixsteps[plane] is relative to the\n * component in the plane with the max pixel step.\n * @param max_pixstep_comps an array which is filled with the component\n * for each plane which has the max pixel step. 
May be NULL.\n */\nvoid av_image_fill_max_pixsteps(int max_pixsteps[4], int max_pixstep_comps[4],\n                                const AVPixFmtDescriptor *pixdesc);\n\n/**\n * Compute the size of an image line with format pix_fmt and width\n * width for the plane plane.\n *\n * @return the computed size in bytes\n */\nint av_image_get_linesize(enum AVPixelFormat pix_fmt, int width, int plane);\n\n/**\n * Fill plane linesizes for an image with pixel format pix_fmt and\n * width width.\n *\n * @param linesizes array to be filled with the linesize for each plane\n * @return >= 0 in case of success, a negative error code otherwise\n */\nint av_image_fill_linesizes(int linesizes[4], enum AVPixelFormat pix_fmt, int width);\n\n/**\n * Fill plane data pointers for an image with pixel format pix_fmt and\n * height height.\n *\n * @param data pointers array to be filled with the pointer for each image plane\n * @param ptr the pointer to a buffer which will contain the image\n * @param linesizes the array containing the linesize for each\n * plane, should be filled by av_image_fill_linesizes()\n * @return the size in bytes required for the image buffer, a negative\n * error code in case of failure\n */\nint av_image_fill_pointers(uint8_t *data[4], enum AVPixelFormat pix_fmt, int height,\n                           uint8_t *ptr, const int linesizes[4]);\n\n/**\n * Allocate an image with size w and h and pixel format pix_fmt, and\n * fill pointers and linesizes accordingly.\n * The allocated image buffer has to be freed by using\n * av_freep(&pointers[0]).\n *\n * @param align the value to use for buffer size alignment\n * @return the size in bytes required for the image buffer, a negative\n * error code in case of failure\n */\nint av_image_alloc(uint8_t *pointers[4], int linesizes[4],\n                   int w, int h, enum AVPixelFormat pix_fmt, int align);\n\n/**\n * Copy image plane from src to dst.\n * That is, copy \"height\" number of lines of \"bytewidth\" bytes 
each.\n * The first byte of each successive line is separated by *_linesize\n * bytes.\n *\n * bytewidth must be contained by both absolute values of dst_linesize\n * and src_linesize, otherwise the function behavior is undefined.\n *\n * @param dst_linesize linesize for the image plane in dst\n * @param src_linesize linesize for the image plane in src\n */\nvoid av_image_copy_plane(uint8_t       *dst, int dst_linesize,\n                         const uint8_t *src, int src_linesize,\n                         int bytewidth, int height);\n\n/**\n * Copy image in src_data to dst_data.\n *\n * @param dst_linesizes linesizes for the image in dst_data\n * @param src_linesizes linesizes for the image in src_data\n */\nvoid av_image_copy(uint8_t *dst_data[4], int dst_linesizes[4],\n                   const uint8_t *src_data[4], const int src_linesizes[4],\n                   enum AVPixelFormat pix_fmt, int width, int height);\n\n/**\n * Copy image data located in uncacheable (e.g. GPU mapped) memory. Where\n * available, this function will use special functionality for reading from such\n * memory, which may result in greatly improved performance compared to plain\n * av_image_copy().\n *\n * The data pointers and the linesizes must be aligned to the maximum required\n * by the CPU architecture.\n *\n * @note The linesize parameters have the type ptrdiff_t here, while they are\n *       int for av_image_copy().\n * @note On x86, the linesizes currently need to be aligned to the cacheline\n *       size (i.e. 
64) to get improved performance.\n */\nvoid av_image_copy_uc_from(uint8_t *dst_data[4],       const ptrdiff_t dst_linesizes[4],\n                           const uint8_t *src_data[4], const ptrdiff_t src_linesizes[4],\n                           enum AVPixelFormat pix_fmt, int width, int height);\n\n/**\n * Setup the data pointers and linesizes based on the specified image\n * parameters and the provided array.\n *\n * The fields of the given image are filled in by using the src\n * address which points to the image data buffer. Depending on the\n * specified pixel format, one or multiple image data pointers and\n * line sizes will be set.  If a planar format is specified, several\n * pointers will be set pointing to the different picture planes and\n * the line sizes of the different planes will be stored in the\n * lines_sizes array. Call with src == NULL to get the required\n * size for the src buffer.\n *\n * To allocate the buffer and fill in the dst_data and dst_linesize in\n * one call, use av_image_alloc().\n *\n * @param dst_data      data pointers to be filled in\n * @param dst_linesize  linesizes for the image in dst_data to be filled in\n * @param src           buffer which will contain or contains the actual image data, can be NULL\n * @param pix_fmt       the pixel format of the image\n * @param width         the width of the image in pixels\n * @param height        the height of the image in pixels\n * @param align         the value used in src for linesize alignment\n * @return the size in bytes required for src, a negative error code\n * in case of failure\n */\nint av_image_fill_arrays(uint8_t *dst_data[4], int dst_linesize[4],\n                         const uint8_t *src,\n                         enum AVPixelFormat pix_fmt, int width, int height, int align);\n\n/**\n * Return the size in bytes of the amount of data required to store an\n * image with the given parameters.\n *\n * @param pix_fmt  the pixel format of the image\n * @param width    
the width of the image in pixels\n * @param height   the height of the image in pixels\n * @param align    the assumed linesize alignment\n * @return the buffer size in bytes, a negative error code in case of failure\n */\nint av_image_get_buffer_size(enum AVPixelFormat pix_fmt, int width, int height, int align);\n\n/**\n * Copy image data from an image into a buffer.\n *\n * av_image_get_buffer_size() can be used to compute the required size\n * for the buffer to fill.\n *\n * @param dst           a buffer into which picture data will be copied\n * @param dst_size      the size in bytes of dst\n * @param src_data      pointers containing the source image data\n * @param src_linesize  linesizes for the image in src_data\n * @param pix_fmt       the pixel format of the source image\n * @param width         the width of the source image in pixels\n * @param height        the height of the source image in pixels\n * @param align         the assumed linesize alignment for dst\n * @return the number of bytes written to dst, or a negative value\n * (error code) on error\n */\nint av_image_copy_to_buffer(uint8_t *dst, int dst_size,\n                            const uint8_t * const src_data[4], const int src_linesize[4],\n                            enum AVPixelFormat pix_fmt, int width, int height, int align);\n\n/**\n * Check if the given dimension of an image is valid, meaning that all\n * bytes of the image can be addressed with a signed int.\n *\n * @param w the width of the picture\n * @param h the height of the picture\n * @param log_offset the offset to sum to the log level for logging with log_ctx\n * @param log_ctx the parent logging context, it may be NULL\n * @return >= 0 if valid, a negative error code otherwise\n */\nint av_image_check_size(unsigned int w, unsigned int h, int log_offset, void *log_ctx);\n\n/**\n * Check if the given dimension of an image is valid, meaning that all\n * bytes of a plane of an image with the specified pix_fmt can be addressed\n 
* with a signed int.\n *\n * @param w the width of the picture\n * @param h the height of the picture\n * @param max_pixels the maximum number of pixels the user wants to accept\n * @param pix_fmt the pixel format, can be AV_PIX_FMT_NONE if unknown.\n * @param log_offset the offset to sum to the log level for logging with log_ctx\n * @param log_ctx the parent logging context, it may be NULL\n * @return >= 0 if valid, a negative error code otherwise\n */\nint av_image_check_size2(unsigned int w, unsigned int h, int64_t max_pixels, enum AVPixelFormat pix_fmt, int log_offset, void *log_ctx);\n\n/**\n * Check if the given sample aspect ratio of an image is valid.\n *\n * It is considered invalid if the denominator is 0 or if applying the ratio\n * to the image size would make the smaller dimension less than 1. If the\n * sar numerator is 0, it is considered unknown and will return as valid.\n *\n * @param w width of the image\n * @param h height of the image\n * @param sar sample aspect ratio of the image\n * @return 0 if valid, a negative AVERROR code otherwise\n */\nint av_image_check_sar(unsigned int w, unsigned int h, AVRational sar);\n\n/**\n * Overwrite the image data with black. This is suitable for filling a\n * sub-rectangle of an image, meaning the padding between the right most pixel\n * and the left most pixel on the next line will not be overwritten. For some\n * formats, the image size might be rounded up due to inherent alignment.\n *\n * If the pixel format has alpha, the alpha is cleared to opaque.\n *\n * This can return an error if the pixel format is not supported. Normally, all\n * non-hwaccel pixel formats should be supported.\n *\n * Passing NULL for dst_data is allowed. Then the function returns whether the\n * operation would have succeeded. 
(It can return an error if the pix_fmt is\n * not supported.)\n *\n * @param dst_data      data pointers to destination image\n * @param dst_linesize  linesizes for the destination image\n * @param pix_fmt       the pixel format of the image\n * @param range         the color range of the image (important for colorspaces such as YUV)\n * @param width         the width of the image in pixels\n * @param height        the height of the image in pixels\n * @return 0 if the image data was cleared, a negative AVERROR code otherwise\n */\nint av_image_fill_black(uint8_t *dst_data[4], const ptrdiff_t dst_linesize[4],\n                        enum AVPixelFormat pix_fmt, enum AVColorRange range,\n                        int width, int height);\n\n/**\n * @}\n */\n\n\n#endif /* AVUTIL_IMGUTILS_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/intfloat.h",
    "content": "/*\n * Copyright (c) 2011 Mans Rullgard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_INTFLOAT_H\n#define AVUTIL_INTFLOAT_H\n\n#include <stdint.h>\n#include \"attributes.h\"\n\nunion av_intfloat32 {\n    uint32_t i;\n    float    f;\n};\n\nunion av_intfloat64 {\n    uint64_t i;\n    double   f;\n};\n\n/**\n * Reinterpret a 32-bit integer as a float.\n */\nstatic av_always_inline float av_int2float(uint32_t i)\n{\n    union av_intfloat32 v;\n    v.i = i;\n    return v.f;\n}\n\n/**\n * Reinterpret a float as a 32-bit integer.\n */\nstatic av_always_inline uint32_t av_float2int(float f)\n{\n    union av_intfloat32 v;\n    v.f = f;\n    return v.i;\n}\n\n/**\n * Reinterpret a 64-bit integer as a double.\n */\nstatic av_always_inline double av_int2double(uint64_t i)\n{\n    union av_intfloat64 v;\n    v.i = i;\n    return v.f;\n}\n\n/**\n * Reinterpret a double as a 64-bit integer.\n */\nstatic av_always_inline uint64_t av_double2int(double f)\n{\n    union av_intfloat64 v;\n    v.f = f;\n    return v.i;\n}\n\n#endif /* AVUTIL_INTFLOAT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/intreadwrite.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_INTREADWRITE_H\n#define AVUTIL_INTREADWRITE_H\n\n#include <stdint.h>\n#include \"libavutil/avconfig.h\"\n#include \"attributes.h\"\n#include \"bswap.h\"\n\ntypedef union {\n    uint64_t u64;\n    uint32_t u32[2];\n    uint16_t u16[4];\n    uint8_t  u8 [8];\n    double   f64;\n    float    f32[2];\n} av_alias av_alias64;\n\ntypedef union {\n    uint32_t u32;\n    uint16_t u16[2];\n    uint8_t  u8 [4];\n    float    f32;\n} av_alias av_alias32;\n\ntypedef union {\n    uint16_t u16;\n    uint8_t  u8 [2];\n} av_alias av_alias16;\n\n/*\n * Arch-specific headers can provide any combination of\n * AV_[RW][BLN](16|24|32|48|64) and AV_(COPY|SWAP|ZERO)(64|128) macros.\n * Preprocessor symbols must be defined, even if these are implemented\n * as inline functions.\n *\n * R/W means read/write, B/L/N means big/little/native endianness.\n * The following macros require aligned access, compared to their\n * unaligned variants: AV_(COPY|SWAP|ZERO)(64|128), AV_[RW]N[8-64]A.\n * Incorrect usage may range from abysmal performance to crash\n * depending on the platform.\n *\n * The unaligned variants are AV_[RW][BLN][8-64] and AV_COPY*U.\n */\n\n#ifdef HAVE_AV_CONFIG_H\n\n#include 
\"config.h\"\n\n#if   ARCH_ARM\n#   include \"arm/intreadwrite.h\"\n#elif ARCH_AVR32\n#   include \"avr32/intreadwrite.h\"\n#elif ARCH_MIPS\n#   include \"mips/intreadwrite.h\"\n#elif ARCH_PPC\n#   include \"ppc/intreadwrite.h\"\n#elif ARCH_TOMI\n#   include \"tomi/intreadwrite.h\"\n#elif ARCH_X86\n#   include \"x86/intreadwrite.h\"\n#endif\n\n#endif /* HAVE_AV_CONFIG_H */\n\n/*\n * Map AV_RNXX <-> AV_R[BL]XX for all variants provided by per-arch headers.\n */\n\n#if AV_HAVE_BIGENDIAN\n\n#   if    defined(AV_RN16) && !defined(AV_RB16)\n#       define AV_RB16(p) AV_RN16(p)\n#   elif !defined(AV_RN16) &&  defined(AV_RB16)\n#       define AV_RN16(p) AV_RB16(p)\n#   endif\n\n#   if    defined(AV_WN16) && !defined(AV_WB16)\n#       define AV_WB16(p, v) AV_WN16(p, v)\n#   elif !defined(AV_WN16) &&  defined(AV_WB16)\n#       define AV_WN16(p, v) AV_WB16(p, v)\n#   endif\n\n#   if    defined(AV_RN24) && !defined(AV_RB24)\n#       define AV_RB24(p) AV_RN24(p)\n#   elif !defined(AV_RN24) &&  defined(AV_RB24)\n#       define AV_RN24(p) AV_RB24(p)\n#   endif\n\n#   if    defined(AV_WN24) && !defined(AV_WB24)\n#       define AV_WB24(p, v) AV_WN24(p, v)\n#   elif !defined(AV_WN24) &&  defined(AV_WB24)\n#       define AV_WN24(p, v) AV_WB24(p, v)\n#   endif\n\n#   if    defined(AV_RN32) && !defined(AV_RB32)\n#       define AV_RB32(p) AV_RN32(p)\n#   elif !defined(AV_RN32) &&  defined(AV_RB32)\n#       define AV_RN32(p) AV_RB32(p)\n#   endif\n\n#   if    defined(AV_WN32) && !defined(AV_WB32)\n#       define AV_WB32(p, v) AV_WN32(p, v)\n#   elif !defined(AV_WN32) &&  defined(AV_WB32)\n#       define AV_WN32(p, v) AV_WB32(p, v)\n#   endif\n\n#   if    defined(AV_RN48) && !defined(AV_RB48)\n#       define AV_RB48(p) AV_RN48(p)\n#   elif !defined(AV_RN48) &&  defined(AV_RB48)\n#       define AV_RN48(p) AV_RB48(p)\n#   endif\n\n#   if    defined(AV_WN48) && !defined(AV_WB48)\n#       define AV_WB48(p, v) AV_WN48(p, v)\n#   elif !defined(AV_WN48) &&  defined(AV_WB48)\n#       define 
AV_WN48(p, v) AV_WB48(p, v)\n#   endif\n\n#   if    defined(AV_RN64) && !defined(AV_RB64)\n#       define AV_RB64(p) AV_RN64(p)\n#   elif !defined(AV_RN64) &&  defined(AV_RB64)\n#       define AV_RN64(p) AV_RB64(p)\n#   endif\n\n#   if    defined(AV_WN64) && !defined(AV_WB64)\n#       define AV_WB64(p, v) AV_WN64(p, v)\n#   elif !defined(AV_WN64) &&  defined(AV_WB64)\n#       define AV_WN64(p, v) AV_WB64(p, v)\n#   endif\n\n#else /* AV_HAVE_BIGENDIAN */\n\n#   if    defined(AV_RN16) && !defined(AV_RL16)\n#       define AV_RL16(p) AV_RN16(p)\n#   elif !defined(AV_RN16) &&  defined(AV_RL16)\n#       define AV_RN16(p) AV_RL16(p)\n#   endif\n\n#   if    defined(AV_WN16) && !defined(AV_WL16)\n#       define AV_WL16(p, v) AV_WN16(p, v)\n#   elif !defined(AV_WN16) &&  defined(AV_WL16)\n#       define AV_WN16(p, v) AV_WL16(p, v)\n#   endif\n\n#   if    defined(AV_RN24) && !defined(AV_RL24)\n#       define AV_RL24(p) AV_RN24(p)\n#   elif !defined(AV_RN24) &&  defined(AV_RL24)\n#       define AV_RN24(p) AV_RL24(p)\n#   endif\n\n#   if    defined(AV_WN24) && !defined(AV_WL24)\n#       define AV_WL24(p, v) AV_WN24(p, v)\n#   elif !defined(AV_WN24) &&  defined(AV_WL24)\n#       define AV_WN24(p, v) AV_WL24(p, v)\n#   endif\n\n#   if    defined(AV_RN32) && !defined(AV_RL32)\n#       define AV_RL32(p) AV_RN32(p)\n#   elif !defined(AV_RN32) &&  defined(AV_RL32)\n#       define AV_RN32(p) AV_RL32(p)\n#   endif\n\n#   if    defined(AV_WN32) && !defined(AV_WL32)\n#       define AV_WL32(p, v) AV_WN32(p, v)\n#   elif !defined(AV_WN32) &&  defined(AV_WL32)\n#       define AV_WN32(p, v) AV_WL32(p, v)\n#   endif\n\n#   if    defined(AV_RN48) && !defined(AV_RL48)\n#       define AV_RL48(p) AV_RN48(p)\n#   elif !defined(AV_RN48) &&  defined(AV_RL48)\n#       define AV_RN48(p) AV_RL48(p)\n#   endif\n\n#   if    defined(AV_WN48) && !defined(AV_WL48)\n#       define AV_WL48(p, v) AV_WN48(p, v)\n#   elif !defined(AV_WN48) &&  defined(AV_WL48)\n#       define AV_WN48(p, v) AV_WL48(p, v)\n#   
endif\n\n#   if    defined(AV_RN64) && !defined(AV_RL64)\n#       define AV_RL64(p) AV_RN64(p)\n#   elif !defined(AV_RN64) &&  defined(AV_RL64)\n#       define AV_RN64(p) AV_RL64(p)\n#   endif\n\n#   if    defined(AV_WN64) && !defined(AV_WL64)\n#       define AV_WL64(p, v) AV_WN64(p, v)\n#   elif !defined(AV_WN64) &&  defined(AV_WL64)\n#       define AV_WN64(p, v) AV_WL64(p, v)\n#   endif\n\n#endif /* !AV_HAVE_BIGENDIAN */\n\n/*\n * Define AV_[RW]N helper macros to simplify definitions not provided\n * by per-arch headers.\n */\n\n#if defined(__GNUC__)\n\nunion unaligned_64 { uint64_t l; } __attribute__((packed)) av_alias;\nunion unaligned_32 { uint32_t l; } __attribute__((packed)) av_alias;\nunion unaligned_16 { uint16_t l; } __attribute__((packed)) av_alias;\n\n#   define AV_RN(s, p) (((const union unaligned_##s *) (p))->l)\n#   define AV_WN(s, p, v) ((((union unaligned_##s *) (p))->l) = (v))\n\n#elif defined(_MSC_VER) && (defined(_M_ARM) || defined(_M_X64) || defined(_M_ARM64)) && AV_HAVE_FAST_UNALIGNED\n\n#   define AV_RN(s, p) (*((const __unaligned uint##s##_t*)(p)))\n#   define AV_WN(s, p, v) (*((__unaligned uint##s##_t*)(p)) = (v))\n\n#elif AV_HAVE_FAST_UNALIGNED\n\n#   define AV_RN(s, p) (((const av_alias##s*)(p))->u##s)\n#   define AV_WN(s, p, v) (((av_alias##s*)(p))->u##s = (v))\n\n#else\n\n#ifndef AV_RB16\n#   define AV_RB16(x)                           \\\n    ((((const uint8_t*)(x))[0] << 8) |          \\\n      ((const uint8_t*)(x))[1])\n#endif\n#ifndef AV_WB16\n#   define AV_WB16(p, val) do {                 \\\n        uint16_t d = (val);                     \\\n        ((uint8_t*)(p))[1] = (d);               \\\n        ((uint8_t*)(p))[0] = (d)>>8;            \\\n    } while(0)\n#endif\n\n#ifndef AV_RL16\n#   define AV_RL16(x)                           \\\n    ((((const uint8_t*)(x))[1] << 8) |          \\\n      ((const uint8_t*)(x))[0])\n#endif\n#ifndef AV_WL16\n#   define AV_WL16(p, val) do {                 \\\n        uint16_t d = (val);       
              \\\n        ((uint8_t*)(p))[0] = (d);               \\\n        ((uint8_t*)(p))[1] = (d)>>8;            \\\n    } while(0)\n#endif\n\n#ifndef AV_RB32\n#   define AV_RB32(x)                                \\\n    (((uint32_t)((const uint8_t*)(x))[0] << 24) |    \\\n               (((const uint8_t*)(x))[1] << 16) |    \\\n               (((const uint8_t*)(x))[2] <<  8) |    \\\n                ((const uint8_t*)(x))[3])\n#endif\n#ifndef AV_WB32\n#   define AV_WB32(p, val) do {                 \\\n        uint32_t d = (val);                     \\\n        ((uint8_t*)(p))[3] = (d);               \\\n        ((uint8_t*)(p))[2] = (d)>>8;            \\\n        ((uint8_t*)(p))[1] = (d)>>16;           \\\n        ((uint8_t*)(p))[0] = (d)>>24;           \\\n    } while(0)\n#endif\n\n#ifndef AV_RL32\n#   define AV_RL32(x)                                \\\n    (((uint32_t)((const uint8_t*)(x))[3] << 24) |    \\\n               (((const uint8_t*)(x))[2] << 16) |    \\\n               (((const uint8_t*)(x))[1] <<  8) |    \\\n                ((const uint8_t*)(x))[0])\n#endif\n#ifndef AV_WL32\n#   define AV_WL32(p, val) do {                 \\\n        uint32_t d = (val);                     \\\n        ((uint8_t*)(p))[0] = (d);               \\\n        ((uint8_t*)(p))[1] = (d)>>8;            \\\n        ((uint8_t*)(p))[2] = (d)>>16;           \\\n        ((uint8_t*)(p))[3] = (d)>>24;           \\\n    } while(0)\n#endif\n\n#ifndef AV_RB64\n#   define AV_RB64(x)                                   \\\n    (((uint64_t)((const uint8_t*)(x))[0] << 56) |       \\\n     ((uint64_t)((const uint8_t*)(x))[1] << 48) |       \\\n     ((uint64_t)((const uint8_t*)(x))[2] << 40) |       \\\n     ((uint64_t)((const uint8_t*)(x))[3] << 32) |       \\\n     ((uint64_t)((const uint8_t*)(x))[4] << 24) |       \\\n     ((uint64_t)((const uint8_t*)(x))[5] << 16) |       \\\n     ((uint64_t)((const uint8_t*)(x))[6] <<  8) |       \\\n      (uint64_t)((const 
uint8_t*)(x))[7])\n#endif\n#ifndef AV_WB64\n#   define AV_WB64(p, val) do {                 \\\n        uint64_t d = (val);                     \\\n        ((uint8_t*)(p))[7] = (d);               \\\n        ((uint8_t*)(p))[6] = (d)>>8;            \\\n        ((uint8_t*)(p))[5] = (d)>>16;           \\\n        ((uint8_t*)(p))[4] = (d)>>24;           \\\n        ((uint8_t*)(p))[3] = (d)>>32;           \\\n        ((uint8_t*)(p))[2] = (d)>>40;           \\\n        ((uint8_t*)(p))[1] = (d)>>48;           \\\n        ((uint8_t*)(p))[0] = (d)>>56;           \\\n    } while(0)\n#endif\n\n#ifndef AV_RL64\n#   define AV_RL64(x)                                   \\\n    (((uint64_t)((const uint8_t*)(x))[7] << 56) |       \\\n     ((uint64_t)((const uint8_t*)(x))[6] << 48) |       \\\n     ((uint64_t)((const uint8_t*)(x))[5] << 40) |       \\\n     ((uint64_t)((const uint8_t*)(x))[4] << 32) |       \\\n     ((uint64_t)((const uint8_t*)(x))[3] << 24) |       \\\n     ((uint64_t)((const uint8_t*)(x))[2] << 16) |       \\\n     ((uint64_t)((const uint8_t*)(x))[1] <<  8) |       \\\n      (uint64_t)((const uint8_t*)(x))[0])\n#endif\n#ifndef AV_WL64\n#   define AV_WL64(p, val) do {                 \\\n        uint64_t d = (val);                     \\\n        ((uint8_t*)(p))[0] = (d);               \\\n        ((uint8_t*)(p))[1] = (d)>>8;            \\\n        ((uint8_t*)(p))[2] = (d)>>16;           \\\n        ((uint8_t*)(p))[3] = (d)>>24;           \\\n        ((uint8_t*)(p))[4] = (d)>>32;           \\\n        ((uint8_t*)(p))[5] = (d)>>40;           \\\n        ((uint8_t*)(p))[6] = (d)>>48;           \\\n        ((uint8_t*)(p))[7] = (d)>>56;           \\\n    } while(0)\n#endif\n\n#if AV_HAVE_BIGENDIAN\n#   define AV_RN(s, p)    AV_RB##s(p)\n#   define AV_WN(s, p, v) AV_WB##s(p, v)\n#else\n#   define AV_RN(s, p)    AV_RL##s(p)\n#   define AV_WN(s, p, v) AV_WL##s(p, v)\n#endif\n\n#endif /* HAVE_FAST_UNALIGNED */\n\n#ifndef AV_RN16\n#   define AV_RN16(p) AV_RN(16, 
p)\n#endif\n\n#ifndef AV_RN32\n#   define AV_RN32(p) AV_RN(32, p)\n#endif\n\n#ifndef AV_RN64\n#   define AV_RN64(p) AV_RN(64, p)\n#endif\n\n#ifndef AV_WN16\n#   define AV_WN16(p, v) AV_WN(16, p, v)\n#endif\n\n#ifndef AV_WN32\n#   define AV_WN32(p, v) AV_WN(32, p, v)\n#endif\n\n#ifndef AV_WN64\n#   define AV_WN64(p, v) AV_WN(64, p, v)\n#endif\n\n#if AV_HAVE_BIGENDIAN\n#   define AV_RB(s, p)    AV_RN##s(p)\n#   define AV_WB(s, p, v) AV_WN##s(p, v)\n#   define AV_RL(s, p)    av_bswap##s(AV_RN##s(p))\n#   define AV_WL(s, p, v) AV_WN##s(p, av_bswap##s(v))\n#else\n#   define AV_RB(s, p)    av_bswap##s(AV_RN##s(p))\n#   define AV_WB(s, p, v) AV_WN##s(p, av_bswap##s(v))\n#   define AV_RL(s, p)    AV_RN##s(p)\n#   define AV_WL(s, p, v) AV_WN##s(p, v)\n#endif\n\n#define AV_RB8(x)     (((const uint8_t*)(x))[0])\n#define AV_WB8(p, d)  do { ((uint8_t*)(p))[0] = (d); } while(0)\n\n#define AV_RL8(x)     AV_RB8(x)\n#define AV_WL8(p, d)  AV_WB8(p, d)\n\n#ifndef AV_RB16\n#   define AV_RB16(p)    AV_RB(16, p)\n#endif\n#ifndef AV_WB16\n#   define AV_WB16(p, v) AV_WB(16, p, v)\n#endif\n\n#ifndef AV_RL16\n#   define AV_RL16(p)    AV_RL(16, p)\n#endif\n#ifndef AV_WL16\n#   define AV_WL16(p, v) AV_WL(16, p, v)\n#endif\n\n#ifndef AV_RB32\n#   define AV_RB32(p)    AV_RB(32, p)\n#endif\n#ifndef AV_WB32\n#   define AV_WB32(p, v) AV_WB(32, p, v)\n#endif\n\n#ifndef AV_RL32\n#   define AV_RL32(p)    AV_RL(32, p)\n#endif\n#ifndef AV_WL32\n#   define AV_WL32(p, v) AV_WL(32, p, v)\n#endif\n\n#ifndef AV_RB64\n#   define AV_RB64(p)    AV_RB(64, p)\n#endif\n#ifndef AV_WB64\n#   define AV_WB64(p, v) AV_WB(64, p, v)\n#endif\n\n#ifndef AV_RL64\n#   define AV_RL64(p)    AV_RL(64, p)\n#endif\n#ifndef AV_WL64\n#   define AV_WL64(p, v) AV_WL(64, p, v)\n#endif\n\n#ifndef AV_RB24\n#   define AV_RB24(x)                           \\\n    ((((const uint8_t*)(x))[0] << 16) |         \\\n     (((const uint8_t*)(x))[1] <<  8) |         \\\n      ((const uint8_t*)(x))[2])\n#endif\n#ifndef AV_WB24\n#   define 
AV_WB24(p, d) do {                   \\\n        ((uint8_t*)(p))[2] = (d);               \\\n        ((uint8_t*)(p))[1] = (d)>>8;            \\\n        ((uint8_t*)(p))[0] = (d)>>16;           \\\n    } while(0)\n#endif\n\n#ifndef AV_RL24\n#   define AV_RL24(x)                           \\\n    ((((const uint8_t*)(x))[2] << 16) |         \\\n     (((const uint8_t*)(x))[1] <<  8) |         \\\n      ((const uint8_t*)(x))[0])\n#endif\n#ifndef AV_WL24\n#   define AV_WL24(p, d) do {                   \\\n        ((uint8_t*)(p))[0] = (d);               \\\n        ((uint8_t*)(p))[1] = (d)>>8;            \\\n        ((uint8_t*)(p))[2] = (d)>>16;           \\\n    } while(0)\n#endif\n\n#ifndef AV_RB48\n#   define AV_RB48(x)                                     \\\n    (((uint64_t)((const uint8_t*)(x))[0] << 40) |         \\\n     ((uint64_t)((const uint8_t*)(x))[1] << 32) |         \\\n     ((uint64_t)((const uint8_t*)(x))[2] << 24) |         \\\n     ((uint64_t)((const uint8_t*)(x))[3] << 16) |         \\\n     ((uint64_t)((const uint8_t*)(x))[4] <<  8) |         \\\n      (uint64_t)((const uint8_t*)(x))[5])\n#endif\n#ifndef AV_WB48\n#   define AV_WB48(p, darg) do {                \\\n        uint64_t d = (darg);                    \\\n        ((uint8_t*)(p))[5] = (d);               \\\n        ((uint8_t*)(p))[4] = (d)>>8;            \\\n        ((uint8_t*)(p))[3] = (d)>>16;           \\\n        ((uint8_t*)(p))[2] = (d)>>24;           \\\n        ((uint8_t*)(p))[1] = (d)>>32;           \\\n        ((uint8_t*)(p))[0] = (d)>>40;           \\\n    } while(0)\n#endif\n\n#ifndef AV_RL48\n#   define AV_RL48(x)                                     \\\n    (((uint64_t)((const uint8_t*)(x))[5] << 40) |         \\\n     ((uint64_t)((const uint8_t*)(x))[4] << 32) |         \\\n     ((uint64_t)((const uint8_t*)(x))[3] << 24) |         \\\n     ((uint64_t)((const uint8_t*)(x))[2] << 16) |         \\\n     ((uint64_t)((const uint8_t*)(x))[1] <<  8) |         \\\n      (uint64_t)((const 
uint8_t*)(x))[0])\n#endif\n#ifndef AV_WL48\n#   define AV_WL48(p, darg) do {                \\\n        uint64_t d = (darg);                    \\\n        ((uint8_t*)(p))[0] = (d);               \\\n        ((uint8_t*)(p))[1] = (d)>>8;            \\\n        ((uint8_t*)(p))[2] = (d)>>16;           \\\n        ((uint8_t*)(p))[3] = (d)>>24;           \\\n        ((uint8_t*)(p))[4] = (d)>>32;           \\\n        ((uint8_t*)(p))[5] = (d)>>40;           \\\n    } while(0)\n#endif\n\n/*\n * The AV_[RW]NA macros access naturally aligned data\n * in a type-safe way.\n */\n\n#define AV_RNA(s, p)    (((const av_alias##s*)(p))->u##s)\n#define AV_WNA(s, p, v) (((av_alias##s*)(p))->u##s = (v))\n\n#ifndef AV_RN16A\n#   define AV_RN16A(p) AV_RNA(16, p)\n#endif\n\n#ifndef AV_RN32A\n#   define AV_RN32A(p) AV_RNA(32, p)\n#endif\n\n#ifndef AV_RN64A\n#   define AV_RN64A(p) AV_RNA(64, p)\n#endif\n\n#ifndef AV_WN16A\n#   define AV_WN16A(p, v) AV_WNA(16, p, v)\n#endif\n\n#ifndef AV_WN32A\n#   define AV_WN32A(p, v) AV_WNA(32, p, v)\n#endif\n\n#ifndef AV_WN64A\n#   define AV_WN64A(p, v) AV_WNA(64, p, v)\n#endif\n\n/*\n * The AV_COPYxxU macros are suitable for copying data to/from unaligned\n * memory locations.\n */\n\n#define AV_COPYU(n, d, s) AV_WN##n(d, AV_RN##n(s));\n\n#ifndef AV_COPY16U\n#   define AV_COPY16U(d, s) AV_COPYU(16, d, s)\n#endif\n\n#ifndef AV_COPY32U\n#   define AV_COPY32U(d, s) AV_COPYU(32, d, s)\n#endif\n\n#ifndef AV_COPY64U\n#   define AV_COPY64U(d, s) AV_COPYU(64, d, s)\n#endif\n\n#ifndef AV_COPY128U\n#   define AV_COPY128U(d, s)                                    \\\n    do {                                                        \\\n        AV_COPY64U(d, s);                                       \\\n        AV_COPY64U((char *)(d) + 8, (const char *)(s) + 8);     \\\n    } while(0)\n#endif\n\n/* Parameters for AV_COPY*, AV_SWAP*, AV_ZERO* must be\n * naturally aligned. 
They may be implemented using MMX,\n * so emms_c() must be called before using any float code\n * afterwards.\n */\n\n#define AV_COPY(n, d, s) \\\n    (((av_alias##n*)(d))->u##n = ((const av_alias##n*)(s))->u##n)\n\n#ifndef AV_COPY16\n#   define AV_COPY16(d, s) AV_COPY(16, d, s)\n#endif\n\n#ifndef AV_COPY32\n#   define AV_COPY32(d, s) AV_COPY(32, d, s)\n#endif\n\n#ifndef AV_COPY64\n#   define AV_COPY64(d, s) AV_COPY(64, d, s)\n#endif\n\n#ifndef AV_COPY128\n#   define AV_COPY128(d, s)                    \\\n    do {                                       \\\n        AV_COPY64(d, s);                       \\\n        AV_COPY64((char*)(d)+8, (char*)(s)+8); \\\n    } while(0)\n#endif\n\n#define AV_SWAP(n, a, b) FFSWAP(av_alias##n, *(av_alias##n*)(a), *(av_alias##n*)(b))\n\n#ifndef AV_SWAP64\n#   define AV_SWAP64(a, b) AV_SWAP(64, a, b)\n#endif\n\n#define AV_ZERO(n, d) (((av_alias##n*)(d))->u##n = 0)\n\n#ifndef AV_ZERO16\n#   define AV_ZERO16(d) AV_ZERO(16, d)\n#endif\n\n#ifndef AV_ZERO32\n#   define AV_ZERO32(d) AV_ZERO(32, d)\n#endif\n\n#ifndef AV_ZERO64\n#   define AV_ZERO64(d) AV_ZERO(64, d)\n#endif\n\n#ifndef AV_ZERO128\n#   define AV_ZERO128(d)         \\\n    do {                         \\\n        AV_ZERO64(d);            \\\n        AV_ZERO64((char*)(d)+8); \\\n    } while(0)\n#endif\n\n#endif /* AVUTIL_INTREADWRITE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/lfg.h",
    "content": "/*\n * Lagged Fibonacci PRNG\n * Copyright (c) 2008 Michael Niedermayer\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_LFG_H\n#define AVUTIL_LFG_H\n\n#include <stdint.h>\n\ntypedef struct AVLFG {\n    unsigned int state[64];\n    int index;\n} AVLFG;\n\nvoid av_lfg_init(AVLFG *c, unsigned int seed);\n\n/**\n * Seed the state of the ALFG using binary data.\n *\n * Return value: 0 on success, negative value (AVERROR) on failure.\n */\nint av_lfg_init_from_data(AVLFG *c, const uint8_t *data, unsigned int length);\n\n/**\n * Get the next random unsigned 32-bit number using an ALFG.\n *\n * Please also consider a simple LCG like state= state*1664525+1013904223,\n * it may be good enough and faster for your specific use case.\n */\nstatic inline unsigned int av_lfg_get(AVLFG *c){\n    c->state[c->index & 63] = c->state[(c->index-24) & 63] + c->state[(c->index-55) & 63];\n    return c->state[c->index++ & 63];\n}\n\n/**\n * Get the next random unsigned 32-bit number using a MLFG.\n *\n * Please also consider av_lfg_get() above, it is faster.\n */\nstatic inline unsigned int av_mlfg_get(AVLFG *c){\n    unsigned int a= c->state[(c->index-55) & 63];\n    unsigned int b= c->state[(c->index-24) & 63];\n    return 
c->state[c->index++ & 63] = 2*a*b+a+b;\n}\n\n/**\n * Get the next two numbers generated by a Box-Muller Gaussian\n * generator using the random numbers issued by lfg.\n *\n * @param out array where the two generated numbers are placed\n */\nvoid av_bmg_get(AVLFG *lfg, double out[2]);\n\n#endif /* AVUTIL_LFG_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/log.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_LOG_H\n#define AVUTIL_LOG_H\n\n#include <stdarg.h>\n#include \"avutil.h\"\n#include \"attributes.h\"\n#include \"version.h\"\n\ntypedef enum {\n    AV_CLASS_CATEGORY_NA = 0,\n    AV_CLASS_CATEGORY_INPUT,\n    AV_CLASS_CATEGORY_OUTPUT,\n    AV_CLASS_CATEGORY_MUXER,\n    AV_CLASS_CATEGORY_DEMUXER,\n    AV_CLASS_CATEGORY_ENCODER,\n    AV_CLASS_CATEGORY_DECODER,\n    AV_CLASS_CATEGORY_FILTER,\n    AV_CLASS_CATEGORY_BITSTREAM_FILTER,\n    AV_CLASS_CATEGORY_SWSCALER,\n    AV_CLASS_CATEGORY_SWRESAMPLER,\n    AV_CLASS_CATEGORY_DEVICE_VIDEO_OUTPUT = 40,\n    AV_CLASS_CATEGORY_DEVICE_VIDEO_INPUT,\n    AV_CLASS_CATEGORY_DEVICE_AUDIO_OUTPUT,\n    AV_CLASS_CATEGORY_DEVICE_AUDIO_INPUT,\n    AV_CLASS_CATEGORY_DEVICE_OUTPUT,\n    AV_CLASS_CATEGORY_DEVICE_INPUT,\n    AV_CLASS_CATEGORY_NB  ///< not part of ABI/API\n}AVClassCategory;\n\n#define AV_IS_INPUT_DEVICE(category) \\\n    (((category) == AV_CLASS_CATEGORY_DEVICE_VIDEO_INPUT) || \\\n     ((category) == AV_CLASS_CATEGORY_DEVICE_AUDIO_INPUT) || \\\n     ((category) == AV_CLASS_CATEGORY_DEVICE_INPUT))\n\n#define AV_IS_OUTPUT_DEVICE(category) \\\n    (((category) == 
AV_CLASS_CATEGORY_DEVICE_VIDEO_OUTPUT) || \\\n     ((category) == AV_CLASS_CATEGORY_DEVICE_AUDIO_OUTPUT) || \\\n     ((category) == AV_CLASS_CATEGORY_DEVICE_OUTPUT))\n\nstruct AVOptionRanges;\n\n/**\n * Describe the class of an AVClass context structure. That is an\n * arbitrary struct of which the first field is a pointer to an\n * AVClass struct (e.g. AVCodecContext, AVFormatContext etc.).\n */\ntypedef struct AVClass {\n    /**\n     * The name of the class; usually it is the same name as the\n     * context structure type to which the AVClass is associated.\n     */\n    const char* class_name;\n\n    /**\n     * A pointer to a function which returns the name of a context\n     * instance ctx associated with the class.\n     */\n    const char* (*item_name)(void* ctx);\n\n    /**\n     * a pointer to the first option specified in the class if any or NULL\n     *\n     * @see av_set_default_options()\n     */\n    const struct AVOption *option;\n\n    /**\n     * LIBAVUTIL_VERSION with which this structure was created.\n     * This is used to allow fields to be added without requiring major\n     * version bumps everywhere.\n     */\n\n    int version;\n\n    /**\n     * Offset in the structure where log_level_offset is stored.\n     * 0 means there is no such variable\n     */\n    int log_level_offset_offset;\n\n    /**\n     * Offset in the structure where a pointer to the parent context for\n     * logging is stored. 
For example a decoder could pass its AVCodecContext\n     * to eval as such a parent context, which an av_log() implementation\n     * could then leverage to display the parent context.\n     * The offset can be NULL.\n     */\n    int parent_log_context_offset;\n\n    /**\n     * Return next AVOptions-enabled child or NULL\n     */\n    void* (*child_next)(void *obj, void *prev);\n\n    /**\n     * Return an AVClass corresponding to the next potential\n     * AVOptions-enabled child.\n     *\n     * The difference between child_next and this is that\n     * child_next iterates over _already existing_ objects, while\n     * child_class_next iterates over _all possible_ children.\n     */\n    const struct AVClass* (*child_class_next)(const struct AVClass *prev);\n\n    /**\n     * Category used for visualization (like color)\n     * This is only set if the category is equal for all objects using this class.\n     * available since version (51 << 16 | 56 << 8 | 100)\n     */\n    AVClassCategory category;\n\n    /**\n     * Callback to return the category.\n     * available since version (51 << 16 | 59 << 8 | 100)\n     */\n    AVClassCategory (*get_category)(void* ctx);\n\n    /**\n     * Callback to return the supported/allowed ranges.\n     * available since version (52.12)\n     */\n    int (*query_ranges)(struct AVOptionRanges **, void *obj, const char *key, int flags);\n} AVClass;\n\n/**\n * @addtogroup lavu_log\n *\n * @{\n *\n * @defgroup lavu_log_constants Logging Constants\n *\n * @{\n */\n\n/**\n * Print no output.\n */\n#define AV_LOG_QUIET    -8\n\n/**\n * Something went really wrong and we will crash now.\n */\n#define AV_LOG_PANIC     0\n\n/**\n * Something went wrong and recovery is not possible.\n * For example, no header was found for a format which depends\n * on headers or an illegal combination of parameters is used.\n */\n#define AV_LOG_FATAL     8\n\n/**\n * Something went wrong and cannot losslessly be recovered.\n * However, not all future 
data is affected.\n */\n#define AV_LOG_ERROR    16\n\n/**\n * Something somehow does not look correct. This may or may not\n * lead to problems. An example would be the use of '-vstrict -2'.\n */\n#define AV_LOG_WARNING  24\n\n/**\n * Standard information.\n */\n#define AV_LOG_INFO     32\n\n/**\n * Detailed information.\n */\n#define AV_LOG_VERBOSE  40\n\n/**\n * Stuff which is only useful for libav* developers.\n */\n#define AV_LOG_DEBUG    48\n\n/**\n * Extremely verbose debugging, useful for libav* development.\n */\n#define AV_LOG_TRACE    56\n\n#define AV_LOG_MAX_OFFSET (AV_LOG_TRACE - AV_LOG_QUIET)\n\n/**\n * @}\n */\n\n/**\n * Sets additional colors for extended debugging sessions.\n * @code\n   av_log(ctx, AV_LOG_DEBUG|AV_LOG_C(134), \"Message in purple\\n\");\n   @endcode\n * Requires a 256-color terminal. Use outside of debugging is not\n * recommended.\n */\n#define AV_LOG_C(x) ((x) << 8)\n\n/**\n * Send the specified message to the log if the level is less than or equal\n * to the current av_log_level. By default, all logging messages are sent to\n * stderr. This behavior can be altered by setting a different logging callback\n * function.\n * @see av_log_set_callback\n *\n * @param avcl A pointer to an arbitrary struct of which the first field is a\n *        pointer to an AVClass struct or NULL if general log.\n * @param level The importance level of the message expressed using a @ref\n *        lavu_log_constants \"Logging Constant\".\n * @param fmt The format string (printf-compatible) that specifies how\n *        subsequent arguments are converted to output.\n */\nvoid av_log(void *avcl, int level, const char *fmt, ...) av_printf_format(3, 4);\n\n\n/**\n * Send the specified message to the log if the level is less than or equal\n * to the current av_log_level. By default, all logging messages are sent to\n * stderr. 
This behavior can be altered by setting a different logging callback\n * function.\n * @see av_log_set_callback\n *\n * @param avcl A pointer to an arbitrary struct of which the first field is a\n *        pointer to an AVClass struct.\n * @param level The importance level of the message expressed using a @ref\n *        lavu_log_constants \"Logging Constant\".\n * @param fmt The format string (printf-compatible) that specifies how\n *        subsequent arguments are converted to output.\n * @param vl The arguments referenced by the format string.\n */\nvoid av_vlog(void *avcl, int level, const char *fmt, va_list vl);\n\n/**\n * Get the current log level\n *\n * @see lavu_log_constants\n *\n * @return Current log level\n */\nint av_log_get_level(void);\n\n/**\n * Set the log level\n *\n * @see lavu_log_constants\n *\n * @param level Logging level\n */\nvoid av_log_set_level(int level);\n\n/**\n * Set the logging callback\n *\n * @note The callback must be thread safe, even if the application does not use\n *       threads itself as some codecs are multithreaded.\n *\n * @see av_log_default_callback\n *\n * @param callback A logging function with a compatible signature.\n */\nvoid av_log_set_callback(void (*callback)(void*, int, const char*, va_list));\n\n/**\n * Default logging callback\n *\n * It prints the message to stderr, optionally colorizing it.\n *\n * @param avcl A pointer to an arbitrary struct of which the first field is a\n *        pointer to an AVClass struct.\n * @param level The importance level of the message expressed using a @ref\n *        lavu_log_constants \"Logging Constant\".\n * @param fmt The format string (printf-compatible) that specifies how\n *        subsequent arguments are converted to output.\n * @param vl The arguments referenced by the format string.\n */\nvoid av_log_default_callback(void *avcl, int level, const char *fmt,\n                             va_list vl);\n\n/**\n * Return the context name\n *\n * @param  ctx The 
AVClass context\n *\n * @return The AVClass class_name\n */\nconst char* av_default_item_name(void* ctx);\nAVClassCategory av_default_get_category(void *ptr);\n\n/**\n * Format a line of log the same way as the default callback.\n * @param line          buffer to receive the formatted line\n * @param line_size     size of the buffer\n * @param print_prefix  used to store whether the prefix must be printed;\n *                      must point to a persistent integer initially set to 1\n */\nvoid av_log_format_line(void *ptr, int level, const char *fmt, va_list vl,\n                        char *line, int line_size, int *print_prefix);\n\n/**\n * Format a line of log the same way as the default callback.\n * @param line          buffer to receive the formatted line;\n *                      may be NULL if line_size is 0\n * @param line_size     size of the buffer; at most line_size-1 characters will\n *                      be written to the buffer, plus one null terminator\n * @param print_prefix  used to store whether the prefix must be printed;\n *                      must point to a persistent integer initially set to 1\n * @return Returns a negative value if an error occurred, otherwise returns\n *         the number of characters that would have been written for a\n *         sufficiently large buffer, not including the terminating null\n *         character. 
If the return value is not less than line_size, it means\n *         that the log message was truncated to fit the buffer.\n */\nint av_log_format_line2(void *ptr, int level, const char *fmt, va_list vl,\n                        char *line, int line_size, int *print_prefix);\n\n/**\n * Skip repeated messages; this requires the user app to use av_log() instead of\n * (f)printf, as the two would otherwise interfere and lead to\n * \"Last message repeated x times\" messages below (f)printf messages with some\n * bad luck.\n * Also, to receive the last \"last repeated\" line if any, the user app must\n * call av_log(NULL, AV_LOG_QUIET, \"%s\", \"\"); at the end.\n */\n#define AV_LOG_SKIP_REPEATED 1\n\n/**\n * Include the log severity in messages originating from codecs.\n *\n * Results in messages such as:\n * [rawvideo @ 0xDEADBEEF] [error] encode did not produce valid pts\n */\n#define AV_LOG_PRINT_LEVEL 2\n\nvoid av_log_set_flags(int arg);\nint av_log_get_flags(void);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_LOG_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/lzo.h",
    "content": "/*\n * LZO 1x decompression\n * copyright (c) 2006 Reimar Doeffinger\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_LZO_H\n#define AVUTIL_LZO_H\n\n/**\n * @defgroup lavu_lzo LZO\n * @ingroup lavu_crypto\n *\n * @{\n */\n\n#include <stdint.h>\n\n/** @name Error flags returned by av_lzo1x_decode\n * @{ */\n/// end of the input buffer reached before decoding finished\n#define AV_LZO_INPUT_DEPLETED  1\n/// decoded data did not fit into output buffer\n#define AV_LZO_OUTPUT_FULL     2\n/// a reference to previously decoded data was wrong\n#define AV_LZO_INVALID_BACKPTR 4\n/// a non-specific error in the compressed bitstream\n#define AV_LZO_ERROR           8\n/** @} */\n\n#define AV_LZO_INPUT_PADDING   8\n#define AV_LZO_OUTPUT_PADDING 12\n\n/**\n * @brief Decodes LZO 1x compressed data.\n * @param out output buffer\n * @param outlen size of output buffer, number of bytes left are returned here\n * @param in input buffer\n * @param inlen size of input buffer, number of bytes left are returned here\n * @return 0 on success, otherwise a combination of the error flags above\n *\n * Make sure all buffers are appropriately padded, in must provide\n * AV_LZO_INPUT_PADDING, out must provide AV_LZO_OUTPUT_PADDING additional 
bytes.\n */\nint av_lzo1x_decode(void *out, int *outlen, const void *in, int *inlen);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_LZO_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/macros.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu\n * Utility Preprocessor macros\n */\n\n#ifndef AVUTIL_MACROS_H\n#define AVUTIL_MACROS_H\n\n/**\n * @addtogroup preproc_misc Preprocessor String Macros\n *\n * String manipulation macros\n *\n * @{\n */\n\n#define AV_STRINGIFY(s)         AV_TOSTRING(s)\n#define AV_TOSTRING(s) #s\n\n#define AV_GLUE(a, b) a ## b\n#define AV_JOIN(a, b) AV_GLUE(a, b)\n\n/**\n * @}\n */\n\n#define AV_PRAGMA(s) _Pragma(#s)\n\n#define FFALIGN(x, a) (((x)+(a)-1)&~((a)-1))\n\n#endif /* AVUTIL_MACROS_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/mastering_display_metadata.h",
    "content": "/*\n * Copyright (c) 2016 Neil Birkbeck <neil.birkbeck@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_MASTERING_DISPLAY_METADATA_H\n#define AVUTIL_MASTERING_DISPLAY_METADATA_H\n\n#include \"frame.h\"\n#include \"rational.h\"\n\n\n/**\n * Mastering display metadata capable of representing the color volume of\n * the display used to master the content (SMPTE 2086:2014).\n *\n * To be used as payload of a AVFrameSideData or AVPacketSideData with the\n * appropriate type.\n *\n * @note The struct should be allocated with av_mastering_display_metadata_alloc()\n *       and its size is not a part of the public ABI.\n */\ntypedef struct AVMasteringDisplayMetadata {\n    /**\n     * CIE 1931 xy chromaticity coords of color primaries (r, g, b order).\n     */\n    AVRational display_primaries[3][2];\n\n    /**\n     * CIE 1931 xy chromaticity coords of white point.\n     */\n    AVRational white_point[2];\n\n    /**\n     * Min luminance of mastering display (cd/m^2).\n     */\n    AVRational min_luminance;\n\n    /**\n     * Max luminance of mastering display (cd/m^2).\n     */\n    AVRational max_luminance;\n\n    /**\n     * Flag indicating whether the display primaries (and white point) are set.\n     */\n  
  int has_primaries;\n\n    /**\n     * Flag indicating whether the luminance (min_ and max_) have been set.\n     */\n    int has_luminance;\n\n} AVMasteringDisplayMetadata;\n\n/**\n * Allocate an AVMasteringDisplayMetadata structure and set its fields to\n * default values. The resulting struct can be freed using av_freep().\n *\n * @return An AVMasteringDisplayMetadata filled with default values or NULL\n *         on failure.\n */\nAVMasteringDisplayMetadata *av_mastering_display_metadata_alloc(void);\n\n/**\n * Allocate a complete AVMasteringDisplayMetadata and add it to the frame.\n *\n * @param frame The frame which side data is added to.\n *\n * @return The AVMasteringDisplayMetadata structure to be filled by caller.\n */\nAVMasteringDisplayMetadata *av_mastering_display_metadata_create_side_data(AVFrame *frame);\n\n/**\n * Content light level needed by to transmit HDR over HDMI (CTA-861.3).\n *\n * To be used as payload of a AVFrameSideData or AVPacketSideData with the\n * appropriate type.\n *\n * @note The struct should be allocated with av_content_light_metadata_alloc()\n *       and its size is not a part of the public ABI.\n */\ntypedef struct AVContentLightMetadata {\n    /**\n     * Max content light level (cd/m^2).\n     */\n    unsigned MaxCLL;\n\n    /**\n     * Max average light level per frame (cd/m^2).\n     */\n    unsigned MaxFALL;\n} AVContentLightMetadata;\n\n/**\n * Allocate an AVContentLightMetadata structure and set its fields to\n * default values. 
The resulting struct can be freed using av_freep().\n *\n * @return An AVContentLightMetadata filled with default values or NULL\n *         on failure.\n */\nAVContentLightMetadata *av_content_light_metadata_alloc(size_t *size);\n\n/**\n * Allocate a complete AVContentLightMetadata and add it to the frame.\n *\n * @param frame The frame which side data is added to.\n *\n * @return The AVContentLightMetadata structure to be filled by caller.\n */\nAVContentLightMetadata *av_content_light_metadata_create_side_data(AVFrame *frame);\n\n#endif /* AVUTIL_MASTERING_DISPLAY_METADATA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/mathematics.h",
    "content": "/*\n * copyright (c) 2005-2012 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @addtogroup lavu_math\n * Mathematical utilities for working with timestamp and time base.\n */\n\n#ifndef AVUTIL_MATHEMATICS_H\n#define AVUTIL_MATHEMATICS_H\n\n#include <stdint.h>\n#include <math.h>\n#include \"attributes.h\"\n#include \"rational.h\"\n#include \"intfloat.h\"\n\n#ifndef M_E\n#define M_E            2.7182818284590452354   /* e */\n#endif\n#ifndef M_LN2\n#define M_LN2          0.69314718055994530942  /* log_e 2 */\n#endif\n#ifndef M_LN10\n#define M_LN10         2.30258509299404568402  /* log_e 10 */\n#endif\n#ifndef M_LOG2_10\n#define M_LOG2_10      3.32192809488736234787  /* log_2 10 */\n#endif\n#ifndef M_PHI\n#define M_PHI          1.61803398874989484820   /* phi / golden ratio */\n#endif\n#ifndef M_PI\n#define M_PI           3.14159265358979323846  /* pi */\n#endif\n#ifndef M_PI_2\n#define M_PI_2         1.57079632679489661923  /* pi/2 */\n#endif\n#ifndef M_SQRT1_2\n#define M_SQRT1_2      0.70710678118654752440  /* 1/sqrt(2) */\n#endif\n#ifndef M_SQRT2\n#define M_SQRT2        1.41421356237309504880  /* sqrt(2) */\n#endif\n#ifndef NAN\n#define NAN            
av_int2float(0x7fc00000)\n#endif\n#ifndef INFINITY\n#define INFINITY       av_int2float(0x7f800000)\n#endif\n\n/**\n * @addtogroup lavu_math\n *\n * @{\n */\n\n/**\n * Rounding methods.\n */\nenum AVRounding {\n    AV_ROUND_ZERO     = 0, ///< Round toward zero.\n    AV_ROUND_INF      = 1, ///< Round away from zero.\n    AV_ROUND_DOWN     = 2, ///< Round toward -infinity.\n    AV_ROUND_UP       = 3, ///< Round toward +infinity.\n    AV_ROUND_NEAR_INF = 5, ///< Round to nearest and halfway cases away from zero.\n    /**\n     * Flag telling rescaling functions to pass `INT64_MIN`/`MAX` through\n     * unchanged, avoiding special cases for #AV_NOPTS_VALUE.\n     *\n     * Unlike other values of the enumeration AVRounding, this value is a\n     * bitmask that must be used in conjunction with another value of the\n     * enumeration through a bitwise OR, in order to set behavior for normal\n     * cases.\n     *\n     * @code{.c}\n     * av_rescale_rnd(3, 1, 2, AV_ROUND_UP | AV_ROUND_PASS_MINMAX);\n     * // Rescaling 3:\n     * //     Calculating 3 * 1 / 2\n     * //     3 / 2 is rounded up to 2\n     * //     => 2\n     *\n     * av_rescale_rnd(AV_NOPTS_VALUE, 1, 2, AV_ROUND_UP | AV_ROUND_PASS_MINMAX);\n     * // Rescaling AV_NOPTS_VALUE:\n     * //     AV_NOPTS_VALUE == INT64_MIN\n     * //     AV_NOPTS_VALUE is passed through\n     * //     => AV_NOPTS_VALUE\n     * @endcode\n     */\n    AV_ROUND_PASS_MINMAX = 8192,\n};\n\n/**\n * Compute the greatest common divisor of two integer operands.\n *\n * @param a,b Operands\n * @return GCD of a and b up to sign; if a >= 0 and b >= 0, return value is >= 0;\n * if a == 0 and b == 0, returns 0.\n */\nint64_t av_const av_gcd(int64_t a, int64_t b);\n\n/**\n * Rescale a 64-bit integer with rounding to nearest.\n *\n * The operation is mathematically equivalent to `a * b / c`, but writing that\n * directly can overflow.\n *\n * This function is equivalent to av_rescale_rnd() with #AV_ROUND_NEAR_INF.\n *\n * @see 
av_rescale_rnd(), av_rescale_q(), av_rescale_q_rnd()\n */\nint64_t av_rescale(int64_t a, int64_t b, int64_t c) av_const;\n\n/**\n * Rescale a 64-bit integer with specified rounding.\n *\n * The operation is mathematically equivalent to `a * b / c`, but writing that\n * directly can overflow, and does not support different rounding methods.\n *\n * @see av_rescale(), av_rescale_q(), av_rescale_q_rnd()\n */\nint64_t av_rescale_rnd(int64_t a, int64_t b, int64_t c, enum AVRounding rnd) av_const;\n\n/**\n * Rescale a 64-bit integer by 2 rational numbers.\n *\n * The operation is mathematically equivalent to `a * bq / cq`.\n *\n * This function is equivalent to av_rescale_q_rnd() with #AV_ROUND_NEAR_INF.\n *\n * @see av_rescale(), av_rescale_rnd(), av_rescale_q_rnd()\n */\nint64_t av_rescale_q(int64_t a, AVRational bq, AVRational cq) av_const;\n\n/**\n * Rescale a 64-bit integer by 2 rational numbers with specified rounding.\n *\n * The operation is mathematically equivalent to `a * bq / cq`.\n *\n * @see av_rescale(), av_rescale_rnd(), av_rescale_q()\n */\nint64_t av_rescale_q_rnd(int64_t a, AVRational bq, AVRational cq,\n                         enum AVRounding rnd) av_const;\n\n/**\n * Compare two timestamps each in its own time base.\n *\n * @return One of the following values:\n *         - -1 if `ts_a` is before `ts_b`\n *         - 1 if `ts_a` is after `ts_b`\n *         - 0 if they represent the same position\n *\n * @warning\n * The result of the function is undefined if one of the timestamps is outside\n * the `int64_t` range when represented in the other's timebase.\n */\nint av_compare_ts(int64_t ts_a, AVRational tb_a, int64_t ts_b, AVRational tb_b);\n\n/**\n * Compare the remainders of two integer operands divided by a common divisor.\n *\n * In other words, compare the least significant `log2(mod)` bits of integers\n * `a` and `b`.\n *\n * @code{.c}\n * av_compare_mod(0x11, 0x02, 0x10) < 0 // since 0x11 % 0x10  (0x1) < 0x02 % 0x10  (0x2)\n * 
av_compare_mod(0x11, 0x02, 0x20) > 0 // since 0x11 % 0x20 (0x11) > 0x02 % 0x20 (0x02)\n * @endcode\n *\n * @param a,b Operands\n * @param mod Divisor; must be a power of 2\n * @return\n *         - a negative value if `a % mod < b % mod`\n *         - a positive value if `a % mod > b % mod`\n *         - zero             if `a % mod == b % mod`\n */\nint64_t av_compare_mod(uint64_t a, uint64_t b, uint64_t mod);\n\n/**\n * Rescale a timestamp while preserving known durations.\n *\n * This function is designed to be called per audio packet to scale the input\n * timestamp to a different time base. Compared to a simple av_rescale_q()\n * call, this function is robust against possible inconsistent frame durations.\n *\n * The `last` parameter is a state variable that must be preserved for all\n * subsequent calls for the same stream. For the first call, `*last` should be\n * initialized to #AV_NOPTS_VALUE.\n *\n * @param[in]     in_tb    Input time base\n * @param[in]     in_ts    Input timestamp\n * @param[in]     fs_tb    Duration time base; typically this is finer-grained\n *                         (greater) than `in_tb` and `out_tb`\n * @param[in]     duration Duration till the next call to this function (i.e.\n *                         duration of the current packet/frame)\n * @param[in,out] last     Pointer to a timestamp expressed in terms of\n *                         `fs_tb`, acting as a state variable\n * @param[in]     out_tb   Output timebase\n * @return        Timestamp expressed in terms of `out_tb`\n *\n * @note In the context of this function, \"duration\" is in terms of samples, not\n *       seconds.\n */\nint64_t av_rescale_delta(AVRational in_tb, int64_t in_ts,  AVRational fs_tb, int duration, int64_t *last, AVRational out_tb);\n\n/**\n * Add a value to a timestamp.\n *\n * This function guarantees that when the same value is repeatedly added,\n * no accumulation of rounding errors occurs.\n *\n * @param[in] ts     Input timestamp\n * @param[in] 
ts_tb  Input timestamp time base\n * @param[in] inc    Value to be added\n * @param[in] inc_tb Time base of `inc`\n */\nint64_t av_add_stable(AVRational ts_tb, int64_t ts, AVRational inc_tb, int64_t inc);\n\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_MATHEMATICS_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/md5.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_md5\n * Public header for MD5 hash function implementation.\n */\n\n#ifndef AVUTIL_MD5_H\n#define AVUTIL_MD5_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @defgroup lavu_md5 MD5\n * @ingroup lavu_hash\n * MD5 hash function implementation.\n *\n * @{\n */\n\nextern const int av_md5_size;\n\nstruct AVMD5;\n\n/**\n * Allocate an AVMD5 context.\n */\nstruct AVMD5 *av_md5_alloc(void);\n\n/**\n * Initialize MD5 hashing.\n *\n * @param ctx pointer to the function context (of size av_md5_size)\n */\nvoid av_md5_init(struct AVMD5 *ctx);\n\n/**\n * Update hash value.\n *\n * @param ctx hash function context\n * @param src input data to update hash with\n * @param len input data length\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_md5_update(struct AVMD5 *ctx, const uint8_t *src, int len);\n#else\nvoid av_md5_update(struct AVMD5 *ctx, const uint8_t *src, size_t len);\n#endif\n\n/**\n * Finish hashing and output digest value.\n *\n * @param ctx hash function context\n * @param dst buffer where output digest value is stored\n */\nvoid 
av_md5_final(struct AVMD5 *ctx, uint8_t *dst);\n\n/**\n * Hash an array of data.\n *\n * @param dst The output buffer to write the digest into\n * @param src The data to hash\n * @param len The length of the data, in bytes\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_md5_sum(uint8_t *dst, const uint8_t *src, const int len);\n#else\nvoid av_md5_sum(uint8_t *dst, const uint8_t *src, size_t len);\n#endif\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_MD5_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/mem.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_mem\n * Memory handling functions\n */\n\n#ifndef AVUTIL_MEM_H\n#define AVUTIL_MEM_H\n\n#include <limits.h>\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"error.h\"\n#include \"avutil.h\"\n\n/**\n * @addtogroup lavu_mem\n * Utilities for manipulating memory.\n *\n * FFmpeg has several applications of memory that are not required of a typical\n * program. For example, the computing-heavy components like video decoding and\n * encoding can be sped up significantly through the use of aligned memory.\n *\n * However, for each of FFmpeg's applications of memory, there might not be a\n * recognized or standardized API for that specific use. Memory alignment, for\n * instance, varies wildly depending on operating systems, architectures, and\n * compilers. 
Hence, this component of @ref libavutil is created to make\n * dealing with memory consistently possible on all platforms.\n *\n * @{\n *\n * @defgroup lavu_mem_macros Alignment Macros\n * Helper macros for declaring aligned variables.\n * @{\n */\n\n/**\n * @def DECLARE_ALIGNED(n,t,v)\n * Declare a variable that is aligned in memory.\n *\n * @code{.c}\n * DECLARE_ALIGNED(16, uint16_t, aligned_int) = 42;\n * DECLARE_ALIGNED(32, uint8_t, aligned_array)[128];\n *\n * // The default-alignment equivalent would be\n * uint16_t aligned_int = 42;\n * uint8_t aligned_array[128];\n * @endcode\n *\n * @param n Minimum alignment in bytes\n * @param t Type of the variable (or array element)\n * @param v Name of the variable\n */\n\n/**\n * @def DECLARE_ASM_ALIGNED(n,t,v)\n * Declare an aligned variable appropriate for use in inline assembly code.\n *\n * @code{.c}\n * DECLARE_ASM_ALIGNED(16, uint64_t, pw_08) = UINT64_C(0x0008000800080008);\n * @endcode\n *\n * @param n Minimum alignment in bytes\n * @param t Type of the variable (or array element)\n * @param v Name of the variable\n */\n\n/**\n * @def DECLARE_ASM_CONST(n,t,v)\n * Declare a static constant aligned variable appropriate for use in inline\n * assembly code.\n *\n * @code{.c}\n * DECLARE_ASM_CONST(16, uint64_t, pw_08) = UINT64_C(0x0008000800080008);\n * @endcode\n *\n * @param n Minimum alignment in bytes\n * @param t Type of the variable (or array element)\n * @param v Name of the variable\n */\n\n#if defined(__INTEL_COMPILER) && __INTEL_COMPILER < 1110 || defined(__SUNPRO_C)\n    #define DECLARE_ALIGNED(n,t,v)      t __attribute__ ((aligned (n))) v\n    #define DECLARE_ASM_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v\n    #define DECLARE_ASM_CONST(n,t,v)    const t __attribute__ ((aligned (n))) v\n#elif defined(__DJGPP__)\n    #define DECLARE_ALIGNED(n,t,v)      t __attribute__ ((aligned (FFMIN(n, 16)))) v\n    #define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (FFMIN(n, 16)))) v\n    
#define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (FFMIN(n, 16)))) v\n#elif defined(__GNUC__) || defined(__clang__)\n    #define DECLARE_ALIGNED(n,t,v)      t __attribute__ ((aligned (n))) v\n    #define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) v\n    #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (n))) v\n#elif defined(_MSC_VER)\n    #define DECLARE_ALIGNED(n,t,v)      __declspec(align(n)) t v\n    #define DECLARE_ASM_ALIGNED(n,t,v)  __declspec(align(n)) t v\n    #define DECLARE_ASM_CONST(n,t,v)    __declspec(align(n)) static const t v\n#else\n    #define DECLARE_ALIGNED(n,t,v)      t v\n    #define DECLARE_ASM_ALIGNED(n,t,v)  t v\n    #define DECLARE_ASM_CONST(n,t,v)    static const t v\n#endif\n\n/**\n * @}\n */\n\n/**\n * @defgroup lavu_mem_attrs Function Attributes\n * Function attributes applicable to memory handling functions.\n *\n * These function attributes can help compilers emit more useful warnings, or\n * generate better code.\n * @{\n */\n\n/**\n * @def av_malloc_attrib\n * Function attribute denoting a malloc-like function.\n *\n * @see <a href=\"https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-g_t_0040code_007bmalloc_007d-function-attribute-3251\">Function attribute `malloc` in GCC's documentation</a>\n */\n\n#if AV_GCC_VERSION_AT_LEAST(3,1)\n    #define av_malloc_attrib __attribute__((__malloc__))\n#else\n    #define av_malloc_attrib\n#endif\n\n/**\n * @def av_alloc_size(...)\n * Function attribute used on a function that allocates memory, whose size is\n * given by the specified parameter(s).\n *\n * @code{.c}\n * void *av_malloc(size_t size) av_alloc_size(1);\n * void *av_calloc(size_t nmemb, size_t size) av_alloc_size(1, 2);\n * @endcode\n *\n * @param ... 
One or two parameter indexes, separated by a comma\n *\n * @see <a href=\"https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-g_t_0040code_007balloc_005fsize_007d-function-attribute-3220\">Function attribute `alloc_size` in GCC's documentation</a>\n */\n\n#if AV_GCC_VERSION_AT_LEAST(4,3)\n    #define av_alloc_size(...) __attribute__((alloc_size(__VA_ARGS__)))\n#else\n    #define av_alloc_size(...)\n#endif\n\n/**\n * @}\n */\n\n/**\n * @defgroup lavu_mem_funcs Heap Management\n * Functions responsible for allocating, freeing, and copying memory.\n *\n * All memory allocation functions have a built-in upper limit of `INT_MAX`\n * bytes. This may be changed with av_max_alloc(), although exercise extreme\n * caution when doing so.\n *\n * @{\n */\n\n/**\n * Allocate a memory block with alignment suitable for all memory accesses\n * (including vectors if available on the CPU).\n *\n * @param size Size in bytes for the memory block to be allocated\n * @return Pointer to the allocated block, or `NULL` if the block cannot\n *         be allocated\n * @see av_mallocz()\n */\nvoid *av_malloc(size_t size) av_malloc_attrib av_alloc_size(1);\n\n/**\n * Allocate a memory block with alignment suitable for all memory accesses\n * (including vectors if available on the CPU) and zero all the bytes of the\n * block.\n *\n * @param size Size in bytes for the memory block to be allocated\n * @return Pointer to the allocated block, or `NULL` if it cannot be allocated\n * @see av_malloc()\n */\nvoid *av_mallocz(size_t size) av_malloc_attrib av_alloc_size(1);\n\n/**\n * Allocate a memory block for an array with av_malloc().\n *\n * The allocated memory will have size `size * nmemb` bytes.\n *\n * @param nmemb Number of elements\n * @param size  Size of a single element\n * @return Pointer to the allocated block, or `NULL` if the block cannot\n *         be allocated\n * @see av_malloc()\n */\nav_alloc_size(1, 2) void *av_malloc_array(size_t nmemb, size_t 
size);\n\n/**\n * Allocate a memory block for an array with av_mallocz().\n *\n * The allocated memory will have size `size * nmemb` bytes.\n *\n * @param nmemb Number of elements\n * @param size  Size of the single element\n * @return Pointer to the allocated block, or `NULL` if the block cannot\n *         be allocated\n *\n * @see av_mallocz()\n * @see av_malloc_array()\n */\nav_alloc_size(1, 2) void *av_mallocz_array(size_t nmemb, size_t size);\n\n/**\n * Non-inlined equivalent of av_mallocz_array().\n *\n * Created for symmetry with the calloc() C function.\n */\nvoid *av_calloc(size_t nmemb, size_t size) av_malloc_attrib;\n\n/**\n * Allocate, reallocate, or free a block of memory.\n *\n * If `ptr` is `NULL` and `size` > 0, allocate a new block. If `size` is\n * zero, free the memory block pointed to by `ptr`. Otherwise, expand or\n * shrink that block of memory according to `size`.\n *\n * @param ptr  Pointer to a memory block already allocated with\n *             av_realloc() or `NULL`\n * @param size Size in bytes of the memory block to be allocated or\n *             reallocated\n *\n * @return Pointer to a newly-reallocated block or `NULL` if the block\n *         cannot be reallocated or the function is used to free the memory block\n *\n * @warning Unlike av_malloc(), the returned pointer is not guaranteed to be\n *          correctly aligned.\n * @see av_fast_realloc()\n * @see av_reallocp()\n */\nvoid *av_realloc(void *ptr, size_t size) av_alloc_size(2);\n\n/**\n * Allocate, reallocate, or free a block of memory through a pointer to a\n * pointer.\n *\n * If `*ptr` is `NULL` and `size` > 0, allocate a new block. If `size` is\n * zero, free the memory block pointed to by `*ptr`. Otherwise, expand or\n * shrink that block of memory according to `size`.\n *\n * @param[in,out] ptr  Pointer to a pointer to a memory block already allocated\n *                     with av_realloc(), or a pointer to `NULL`. 
The pointer\n *                     is updated on success, or freed on failure.\n * @param[in]     size Size in bytes for the memory block to be allocated or\n *                     reallocated\n *\n * @return Zero on success, an AVERROR error code on failure\n *\n * @warning Unlike av_malloc(), the allocated memory is not guaranteed to be\n *          correctly aligned.\n */\nav_warn_unused_result\nint av_reallocp(void *ptr, size_t size);\n\n/**\n * Allocate, reallocate, or free a block of memory.\n *\n * This function does the same thing as av_realloc(), except:\n * - It takes two size arguments and allocates `nelem * elsize` bytes,\n *   after checking the result of the multiplication for integer overflow.\n * - It frees the input block in case of failure, thus avoiding the memory\n *   leak with the classic\n *   @code{.c}\n *   buf = realloc(buf, size);\n *   if (!buf)\n *       return -1;\n *   @endcode\n *   pattern.\n */\nvoid *av_realloc_f(void *ptr, size_t nelem, size_t elsize);\n\n/**\n * Allocate, reallocate, or free an array.\n *\n * If `ptr` is `NULL` and `nmemb` > 0, allocate a new block. If\n * `nmemb` is zero, free the memory block pointed to by `ptr`.\n *\n * @param ptr   Pointer to a memory block already allocated with\n *              av_realloc() or `NULL`\n * @param nmemb Number of elements in the array\n * @param size  Size of the single element of the array\n *\n * @return Pointer to a newly-reallocated block or NULL if the block\n *         cannot be reallocated or the function is used to free the memory block\n *\n * @warning Unlike av_malloc(), the allocated memory is not guaranteed to be\n *          correctly aligned.\n * @see av_reallocp_array()\n */\nav_alloc_size(2, 3) void *av_realloc_array(void *ptr, size_t nmemb, size_t size);\n\n/**\n * Allocate, reallocate, or free an array through a pointer to a pointer.\n *\n * If `*ptr` is `NULL` and `nmemb` > 0, allocate a new block. 
If `nmemb` is\n * zero, free the memory block pointed to by `*ptr`.\n *\n * @param[in,out] ptr   Pointer to a pointer to a memory block already\n *                      allocated with av_realloc(), or a pointer to `NULL`.\n *                      The pointer is updated on success, or freed on failure.\n * @param[in]     nmemb Number of elements\n * @param[in]     size  Size of the single element\n *\n * @return Zero on success, an AVERROR error code on failure\n *\n * @warning Unlike av_malloc(), the allocated memory is not guaranteed to be\n *          correctly aligned.\n */\nav_alloc_size(2, 3) int av_reallocp_array(void *ptr, size_t nmemb, size_t size);\n\n/**\n * Reallocate the given buffer if it is not large enough, otherwise do nothing.\n *\n * If the given buffer is `NULL`, then a new uninitialized buffer is allocated.\n *\n * If the given buffer is not large enough, and reallocation fails, `NULL` is\n * returned and `*size` is set to 0, but the original buffer is not changed or\n * freed.\n *\n * A typical use pattern follows:\n *\n * @code{.c}\n * uint8_t *buf = ...;\n * uint8_t *new_buf = av_fast_realloc(buf, &current_size, size_needed);\n * if (!new_buf) {\n *     // Allocation failed; clean up original buffer\n *     av_freep(&buf);\n *     return AVERROR(ENOMEM);\n * }\n * @endcode\n *\n * @param[in,out] ptr      Already allocated buffer, or `NULL`\n * @param[in,out] size     Pointer to current size of buffer `ptr`. 
`*size` is\n *                         changed to `min_size` in case of success or 0 in\n *                         case of failure\n * @param[in]     min_size New size of buffer `ptr`\n * @return `ptr` if the buffer is large enough, a pointer to newly reallocated\n *         buffer if the buffer was not large enough, or `NULL` in case of\n *         error\n * @see av_realloc()\n * @see av_fast_malloc()\n */\nvoid *av_fast_realloc(void *ptr, unsigned int *size, size_t min_size);\n\n/**\n * Allocate a buffer, reusing the given one if large enough.\n *\n * Contrary to av_fast_realloc(), the current buffer contents might not be\n * preserved and on error the old buffer is freed, thus no special handling to\n * avoid memleaks is necessary.\n *\n * `*ptr` is allowed to be `NULL`, in which case allocation always happens if\n * `size_needed` is greater than 0.\n *\n * @code{.c}\n * uint8_t *buf = ...;\n * av_fast_malloc(&buf, &current_size, size_needed);\n * if (!buf) {\n *     // Allocation failed; buf already freed\n *     return AVERROR(ENOMEM);\n * }\n * @endcode\n *\n * @param[in,out] ptr      Pointer to pointer to an already allocated buffer.\n *                         `*ptr` will be overwritten with pointer to new\n *                         buffer on success or `NULL` on failure\n * @param[in,out] size     Pointer to current size of buffer `*ptr`. 
`*size` is\n *                         changed to `min_size` in case of success or 0 in\n *                         case of failure\n * @param[in]     min_size New size of buffer `*ptr`\n * @see av_realloc()\n * @see av_fast_mallocz()\n */\nvoid av_fast_malloc(void *ptr, unsigned int *size, size_t min_size);\n\n/**\n * Allocate and clear a buffer, reusing the given one if large enough.\n *\n * Like av_fast_malloc(), but all newly allocated space is initially cleared.\n * Reused buffer is not cleared.\n *\n * `*ptr` is allowed to be `NULL`, in which case allocation always happens if\n * `size_needed` is greater than 0.\n *\n * @param[in,out] ptr      Pointer to pointer to an already allocated buffer.\n *                         `*ptr` will be overwritten with pointer to new\n *                         buffer on success or `NULL` on failure\n * @param[in,out] size     Pointer to current size of buffer `*ptr`. `*size` is\n *                         changed to `min_size` in case of success or 0 in\n *                         case of failure\n * @param[in]     min_size New size of buffer `*ptr`\n * @see av_fast_malloc()\n */\nvoid av_fast_mallocz(void *ptr, unsigned int *size, size_t min_size);\n\n/**\n * Free a memory block which has been allocated with a function of av_malloc()\n * or av_realloc() family.\n *\n * @param ptr Pointer to the memory block which should be freed.\n *\n * @note `ptr = NULL` is explicitly allowed.\n * @note It is recommended that you use av_freep() instead, to prevent leaving\n *       behind dangling pointers.\n * @see av_freep()\n */\nvoid av_free(void *ptr);\n\n/**\n * Free a memory block which has been allocated with a function of av_malloc()\n * or av_realloc() family, and set the pointer pointing to it to `NULL`.\n *\n * @code{.c}\n * uint8_t *buf = av_malloc(16);\n * av_free(buf);\n * // buf now contains a dangling pointer to freed memory, and accidental\n * // dereference of buf will result in a use-after-free, which may be a\n * // 
security risk.\n *\n * uint8_t *buf = av_malloc(16);\n * av_freep(&buf);\n * // buf is now NULL, and accidental dereference will only result in a\n * // NULL-pointer dereference.\n * @endcode\n *\n * @param ptr Pointer to the pointer to the memory block which should be freed\n * @note `*ptr = NULL` is safe and leads to no action.\n * @see av_free()\n */\nvoid av_freep(void *ptr);\n\n/**\n * Duplicate a string.\n *\n * @param s String to be duplicated\n * @return Pointer to a newly-allocated string containing a\n *         copy of `s` or `NULL` if the string cannot be allocated\n * @see av_strndup()\n */\nchar *av_strdup(const char *s) av_malloc_attrib;\n\n/**\n * Duplicate a substring of a string.\n *\n * @param s   String to be duplicated\n * @param len Maximum length of the resulting string (not counting the\n *            terminating byte)\n * @return Pointer to a newly-allocated string containing a\n *         substring of `s` or `NULL` if the string cannot be allocated\n */\nchar *av_strndup(const char *s, size_t len) av_malloc_attrib;\n\n/**\n * Duplicate a buffer with av_malloc().\n *\n * @param p    Buffer to be duplicated\n * @param size Size in bytes of the buffer copied\n * @return Pointer to a newly allocated buffer containing a\n *         copy of `p` or `NULL` if the buffer cannot be allocated\n */\nvoid *av_memdup(const void *p, size_t size);\n\n/**\n * Overlapping memcpy() implementation.\n *\n * @param dst  Destination buffer\n * @param back Number of bytes back to start copying (i.e. 
the initial size of\n *             the overlapping window); must be > 0\n * @param cnt  Number of bytes to copy; must be >= 0\n *\n * @note `cnt > back` is valid, this will copy the bytes we just copied,\n *       thus creating a repeating pattern with a period length of `back`.\n */\nvoid av_memcpy_backptr(uint8_t *dst, int back, int cnt);\n\n/**\n * @}\n */\n\n/**\n * @defgroup lavu_mem_dynarray Dynamic Array\n *\n * Utilities to make an array grow when needed.\n *\n * Sometimes, the programmer would want to have an array that can grow when\n * needed. The libavutil dynamic array utilities fill that need.\n *\n * libavutil supports two systems of appending elements onto a dynamically\n * allocated array, the first one storing the pointer to the value in the\n * array, and the second storing the value directly. In both systems, the\n * caller is responsible for maintaining a variable containing the length of\n * the array, as well as freeing of the array after use.\n *\n * The first system stores pointers to values in a block of dynamically\n * allocated memory. Since only pointers are stored, the function does not need\n * to know the size of the type. Both av_dynarray_add() and\n * av_dynarray_add_nofree() implement this system.\n *\n * @code\n * type **array = NULL; //< an array of pointers to values\n * int    nb    = 0;    //< a variable to keep track of the length of the array\n *\n * type to_be_added  = ...;\n * type to_be_added2 = ...;\n *\n * av_dynarray_add(&array, &nb, &to_be_added);\n * if (nb == 0)\n *     return AVERROR(ENOMEM);\n *\n * av_dynarray_add(&array, &nb, &to_be_added2);\n * if (nb == 0)\n *     return AVERROR(ENOMEM);\n *\n * // Now:\n * //  nb           == 2\n * // &to_be_added  == array[0]\n * // &to_be_added2 == array[1]\n *\n * av_freep(&array);\n * @endcode\n *\n * The second system stores the value directly in a block of memory. As a\n * result, the function has to know the size of the type. 
av_dynarray2_add()\n * implements this mechanism.\n *\n * @code\n * type *array = NULL; //< an array of values\n * int   nb    = 0;    //< a variable to keep track of the length of the array\n *\n * type to_be_added  = ...;\n * type to_be_added2 = ...;\n *\n * type *addr = av_dynarray2_add((void **)&array, &nb, sizeof(*array), NULL);\n * if (!addr)\n *     return AVERROR(ENOMEM);\n * memcpy(addr, &to_be_added, sizeof(to_be_added));\n *\n * // Shortcut of the above.\n * addr = av_dynarray2_add((void **)&array, &nb, sizeof(*array),\n *                        (const void *)&to_be_added2);\n * if (!addr)\n *     return AVERROR(ENOMEM);\n *\n * // Now:\n * //  nb           == 2\n * //  to_be_added  == array[0]\n * //  to_be_added2 == array[1]\n *\n * av_freep(&array);\n * @endcode\n *\n * @{\n */\n\n/**\n * Add the pointer to an element to a dynamic array.\n *\n * The array to grow is supposed to be an array of pointers to\n * structures, and the element to add must be a pointer to an already\n * allocated structure.\n *\n * The array is reallocated when its size reaches powers of 2.\n * Therefore, the amortized cost of adding an element is constant.\n *\n * In case of success, the pointer to the array is updated in order to\n * point to the new grown array, and the number pointed to by `nb_ptr`\n * is incremented.\n * In case of failure, the array is freed, `*tab_ptr` is set to `NULL` and\n * `*nb_ptr` is set to 0.\n *\n * @param[in,out] tab_ptr Pointer to the array to grow\n * @param[in,out] nb_ptr  Pointer to the number of elements in the array\n * @param[in]     elem    Element to add\n * @see av_dynarray_add_nofree(), av_dynarray2_add()\n */\nvoid av_dynarray_add(void *tab_ptr, int *nb_ptr, void *elem);\n\n/**\n * Add an element to a dynamic array.\n *\n * This function has the same functionality as av_dynarray_add(),\n * but it doesn't free memory on failure. 
It returns an error code\n * instead and leaves the current buffer untouched.\n *\n * @return >=0 on success, negative otherwise\n * @see av_dynarray_add(), av_dynarray2_add()\n */\nav_warn_unused_result\nint av_dynarray_add_nofree(void *tab_ptr, int *nb_ptr, void *elem);\n\n/**\n * Add an element of size `elem_size` to a dynamic array.\n *\n * The array is reallocated when its number of elements reaches powers of 2.\n * Therefore, the amortized cost of adding an element is constant.\n *\n * In case of success, the pointer to the array is updated in order to\n * point to the new grown array, and the number pointed to by `nb_ptr`\n * is incremented.\n * In case of failure, the array is freed, `*tab_ptr` is set to `NULL` and\n * `*nb_ptr` is set to 0.\n *\n * @param[in,out] tab_ptr   Pointer to the array to grow\n * @param[in,out] nb_ptr    Pointer to the number of elements in the array\n * @param[in]     elem_size Size in bytes of an element in the array\n * @param[in]     elem_data Pointer to the data of the element to add. 
If\n *                          `NULL`, the space of the newly added element is\n *                          allocated but left uninitialized.\n *\n * @return Pointer to the data of the element to copy in the newly allocated\n *         space\n * @see av_dynarray_add(), av_dynarray_add_nofree()\n */\nvoid *av_dynarray2_add(void **tab_ptr, int *nb_ptr, size_t elem_size,\n                       const uint8_t *elem_data);\n\n/**\n * @}\n */\n\n/**\n * @defgroup lavu_mem_misc Miscellaneous Functions\n *\n * Other functions related to memory allocation.\n *\n * @{\n */\n\n/**\n * Multiply two `size_t` values checking for overflow.\n *\n * @param[in]  a,b Operands of multiplication\n * @param[out] r   Pointer to the result of the operation\n * @return 0 on success, AVERROR(EINVAL) on overflow\n */\nstatic inline int av_size_mult(size_t a, size_t b, size_t *r)\n{\n    size_t t = a * b;\n    /* Hack inspired from glibc: don't try the division if nelem and elsize\n     * are both less than sqrt(SIZE_MAX). */\n    if ((a | b) >= ((size_t)1 << (sizeof(size_t) * 4)) && a && t / a != b)\n        return AVERROR(EINVAL);\n    *r = t;\n    return 0;\n}\n\n/**\n * Set the maximum size that may be allocated in one block.\n *\n * The value specified with this function is effective for all libavutil's @ref\n * lavu_mem_funcs \"heap management functions.\"\n *\n * By default, the max value is defined as `INT_MAX`.\n *\n * @param max Value to be set as the new maximum size\n *\n * @warning Exercise extreme caution when using this function. Don't touch\n *          this if you do not understand the full consequence of doing so.\n */\nvoid av_max_alloc(size_t max);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_MEM_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/motion_vector.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_MOTION_VECTOR_H\n#define AVUTIL_MOTION_VECTOR_H\n\n#include <stdint.h>\n\ntypedef struct AVMotionVector {\n    /**\n     * Where the current macroblock comes from; negative value when it comes\n     * from the past, positive value when it comes from the future.\n     * XXX: set exact relative ref frame reference instead of a +/- 1 \"direction\".\n     */\n    int32_t source;\n    /**\n     * Width and height of the block.\n     */\n    uint8_t w, h;\n    /**\n     * Absolute source position. Can be outside the frame area.\n     */\n    int16_t src_x, src_y;\n    /**\n     * Absolute destination position. Can be outside the frame area.\n     */\n    int16_t dst_x, dst_y;\n    /**\n     * Extra flag information.\n     * Currently unused.\n     */\n    uint64_t flags;\n    /**\n     * Motion vector\n     * src_x = dst_x + motion_x / motion_scale\n     * src_y = dst_y + motion_y / motion_scale\n     */\n    int32_t motion_x, motion_y;\n    uint16_t motion_scale;\n} AVMotionVector;\n\n#endif /* AVUTIL_MOTION_VECTOR_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/murmur3.h",
    "content": "/*\n * Copyright (C) 2013 Reimar Döffinger <Reimar.Doeffinger@gmx.de>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_murmur3\n * Public header for MurmurHash3 hash function implementation.\n */\n\n#ifndef AVUTIL_MURMUR3_H\n#define AVUTIL_MURMUR3_H\n\n#include <stdint.h>\n\n#include \"version.h\"\n\n/**\n * @defgroup lavu_murmur3 Murmur3\n * @ingroup lavu_hash\n * MurmurHash3 hash function implementation.\n *\n * MurmurHash3 is a non-cryptographic hash function, of which three\n * incompatible versions were created by its inventor Austin Appleby:\n *\n * - 32-bit output\n * - 128-bit output for 32-bit platforms\n * - 128-bit output for 64-bit platforms\n *\n * FFmpeg only implements the last variant: 128-bit output designed for 64-bit\n * platforms. Even though the hash function was designed for 64-bit platforms,\n * the function in reality works on 32-bit systems too, only with reduced\n * performance.\n *\n * @anchor lavu_murmur3_seedinfo\n * By design, MurmurHash3 requires a seed to operate. 
In response to this,\n * libavutil provides two functions for hash initiation, one that requires a\n * seed (av_murmur3_init_seeded()) and one that uses a fixed arbitrary integer\n * as the seed, and therefore does not (av_murmur3_init()).\n *\n * To make hashes comparable, you should provide the same seed for all calls to\n * this hash function -- if you are supplying one yourself, that is.\n *\n * @{\n */\n\n/**\n * Allocate an AVMurMur3 hash context.\n *\n * @return Uninitialized hash context or `NULL` in case of error\n */\nstruct AVMurMur3 *av_murmur3_alloc(void);\n\n/**\n * Initialize or reinitialize an AVMurMur3 hash context with a seed.\n *\n * @param[out] c    Hash context\n * @param[in]  seed Random seed\n *\n * @see av_murmur3_init()\n * @see @ref lavu_murmur3_seedinfo \"Detailed description\" on a discussion of\n * seeds for MurmurHash3.\n */\nvoid av_murmur3_init_seeded(struct AVMurMur3 *c, uint64_t seed);\n\n/**\n * Initialize or reinitialize an AVMurMur3 hash context.\n *\n * Equivalent to av_murmur3_init_seeded() with a built-in seed.\n *\n * @param[out] c    Hash context\n *\n * @see av_murmur3_init_seeded()\n * @see @ref lavu_murmur3_seedinfo \"Detailed description\" on a discussion of\n * seeds for MurmurHash3.\n */\nvoid av_murmur3_init(struct AVMurMur3 *c);\n\n/**\n * Update hash context with new data.\n *\n * @param[out] c    Hash context\n * @param[in]  src  Input data to update hash with\n * @param[in]  len  Number of bytes to read from `src`\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_murmur3_update(struct AVMurMur3 *c, const uint8_t *src, int len);\n#else\nvoid av_murmur3_update(struct AVMurMur3 *c, const uint8_t *src, size_t len);\n#endif\n\n/**\n * Finish hashing and output digest value.\n *\n * @param[in,out] c    Hash context\n * @param[out]    dst  Buffer where output digest value is stored\n */\nvoid av_murmur3_final(struct AVMurMur3 *c, uint8_t dst[16]);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_MURMUR3_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/opt.h",
    "content": "/*\n * AVOptions\n * copyright (c) 2005 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_OPT_H\n#define AVUTIL_OPT_H\n\n/**\n * @file\n * AVOptions\n */\n\n#include \"rational.h\"\n#include \"avutil.h\"\n#include \"dict.h\"\n#include \"log.h\"\n#include \"pixfmt.h\"\n#include \"samplefmt.h\"\n#include \"version.h\"\n\n/**\n * @defgroup avoptions AVOptions\n * @ingroup lavu_data\n * @{\n * AVOptions provide a generic system to declare options on arbitrary structs\n * (\"objects\"). An option can have a help text, a type and a range of possible\n * values. Options may then be enumerated, read and written to.\n *\n * @section avoptions_implement Implementing AVOptions\n * This section describes how to add AVOptions capabilities to a struct.\n *\n * All AVOptions-related information is stored in an AVClass. Therefore\n * the first member of the struct should be a pointer to an AVClass describing it.\n * The option field of the AVClass must be set to a NULL-terminated static array\n * of AVOptions. Each AVOption must have a non-empty name, a type, a default\n * value and for number-type AVOptions also a range of allowed values. 
It must\n * also declare an offset in bytes from the start of the struct, where the field\n * associated with this AVOption is located. Other fields in the AVOption struct\n * should also be set when applicable, but are not required.\n *\n * The following example illustrates an AVOptions-enabled struct:\n * @code\n * typedef struct test_struct {\n *     const AVClass *class;\n *     int      int_opt;\n *     char    *str_opt;\n *     uint8_t *bin_opt;\n *     int      bin_len;\n * } test_struct;\n *\n * static const AVOption test_options[] = {\n *   { \"test_int\", \"This is a test option of int type.\", offsetof(test_struct, int_opt),\n *     AV_OPT_TYPE_INT, { .i64 = -1 }, INT_MIN, INT_MAX },\n *   { \"test_str\", \"This is a test option of string type.\", offsetof(test_struct, str_opt),\n *     AV_OPT_TYPE_STRING },\n *   { \"test_bin\", \"This is a test option of binary type.\", offsetof(test_struct, bin_opt),\n *     AV_OPT_TYPE_BINARY },\n *   { NULL },\n * };\n *\n * static const AVClass test_class = {\n *     .class_name = \"test class\",\n *     .item_name  = av_default_item_name,\n *     .option     = test_options,\n *     .version    = LIBAVUTIL_VERSION_INT,\n * };\n * @endcode\n *\n * Next, when allocating your struct, you must ensure that the AVClass pointer\n * is set to the correct value. Then, av_opt_set_defaults() can be called to\n * initialize defaults. 
After that the struct is ready to be used with the\n * AVOptions API.\n *\n * When cleaning up, you may use the av_opt_free() function to automatically\n * free all the allocated string and binary options.\n *\n * Continuing with the above example:\n *\n * @code\n * test_struct *alloc_test_struct(void)\n * {\n *     test_struct *ret = av_mallocz(sizeof(*ret));\n *     ret->class = &test_class;\n *     av_opt_set_defaults(ret);\n *     return ret;\n * }\n * void free_test_struct(test_struct **foo)\n * {\n *     av_opt_free(*foo);\n *     av_freep(foo);\n * }\n * @endcode\n *\n * @subsection avoptions_implement_nesting Nesting\n *      It may happen that an AVOptions-enabled struct contains another\n *      AVOptions-enabled struct as a member (e.g. AVCodecContext in\n *      libavcodec exports generic options, while its priv_data field exports\n *      codec-specific options). In such a case, it is possible to set up the\n *      parent struct to export a child's options. To do that, simply\n *      implement AVClass.child_next() and AVClass.child_class_next() in the\n *      parent struct's AVClass.\n *      Assuming that the test_struct from above now also contains a\n *      child_struct field:\n *\n *      @code\n *      typedef struct child_struct {\n *          AVClass *class;\n *          int flags_opt;\n *      } child_struct;\n *      static const AVOption child_opts[] = {\n *          { \"test_flags\", \"This is a test option of flags type.\",\n *            offsetof(child_struct, flags_opt), AV_OPT_TYPE_FLAGS, { .i64 = 0 }, INT_MIN, INT_MAX },\n *          { NULL },\n *      };\n *      static const AVClass child_class = {\n *          .class_name = \"child class\",\n *          .item_name  = av_default_item_name,\n *          .option     = child_opts,\n *          .version    = LIBAVUTIL_VERSION_INT,\n *      };\n *\n *      void *child_next(void *obj, void *prev)\n *      {\n *          test_struct *t = obj;\n *          if (!prev && t->child_struct)\n 
*              return t->child_struct;\n *          return NULL;\n *      }\n *      const AVClass *child_class_next(const AVClass *prev)\n *      {\n *          return prev ? NULL : &child_class;\n *      }\n *      @endcode\n *      Putting child_next() and child_class_next() as defined above into\n *      test_class will now make child_struct's options accessible through\n *      test_struct (again, proper setup as described above needs to be done on\n *      child_struct right after it is created).\n *\n *      From the above example it might not be clear why both child_next()\n *      and child_class_next() are needed. The distinction is that child_next()\n *      iterates over actually existing objects, while child_class_next()\n *      iterates over all possible child classes. E.g. if an AVCodecContext\n *      was initialized to use a codec which has private options, then its\n *      child_next() will return AVCodecContext.priv_data and finish\n *      iterating. OTOH child_class_next() on AVCodecContext.av_class will\n *      iterate over all available codecs with private options.\n *\n * @subsection avoptions_implement_named_constants Named constants\n *      It is possible to create named constants for options. 
Simply set the unit\n *      field of the option to which the constants should apply to a string, and\n *      create the constants themselves as options of type AV_OPT_TYPE_CONST\n *      with their unit field set to the same string.\n *      Their default_val field should contain the value of the named\n *      constant.\n *      For example, to add some named constants for the test_flags option\n *      above, put the following into the child_opts array:\n *      @code\n *      { \"test_flags\", \"This is a test option of flags type.\",\n *        offsetof(child_struct, flags_opt), AV_OPT_TYPE_FLAGS, { .i64 = 0 }, INT_MIN, INT_MAX, \"test_unit\" },\n *      { \"flag1\", \"This is a flag with value 16\", 0, AV_OPT_TYPE_CONST, { .i64 = 16 }, 0, 0, \"test_unit\" },\n *      @endcode\n *\n * @section avoptions_use Using AVOptions\n * This section deals with accessing options in an AVOptions-enabled struct.\n * Such structs in FFmpeg are e.g. AVCodecContext in libavcodec or\n * AVFormatContext in libavformat.\n *\n * @subsection avoptions_use_examine Examining AVOptions\n * The basic functions for examining options are av_opt_next(), which iterates\n * over all options defined for one object, and av_opt_find(), which searches\n * for an option with the given name.\n *\n * The situation is more complicated with nesting. An AVOptions-enabled struct\n * may have AVOptions-enabled children. Passing the AV_OPT_SEARCH_CHILDREN flag\n * to av_opt_find() will make the function search children recursively.\n *\n * For enumerating there are basically two cases. The first is when you want to\n * get all options that may potentially exist on the struct and its children\n * (e.g. when constructing documentation). In that case you should call\n * av_opt_child_class_next() recursively on the parent struct's AVClass. The\n * second case is when you have an already initialized struct with all its\n * children and you want to get all options that can actually be written or read\n * from it. 
In that case you should call av_opt_child_next() recursively (and\n * av_opt_next() on each result).\n *\n * @subsection avoptions_use_get_set Reading and writing AVOptions\n * When setting options, you often have a string read directly from the\n * user. In such a case, simply passing it to av_opt_set() is enough. For\n * non-string type options, av_opt_set() will parse the string according to the\n * option type.\n *\n * Similarly, av_opt_get() will read any option type and convert it to a string\n * which will be returned. Do not forget that the string is allocated, so you\n * have to free it with av_free().\n *\n * In some cases it may be more convenient to put all options into an\n * AVDictionary and call av_opt_set_dict() on it. A specific case of this\n * is the format/codec open functions in lavf/lavc, which take a dictionary\n * filled with options as a parameter. This makes it possible to set some options\n * that cannot be set otherwise, since e.g. the input file format is not known\n * before the file is actually opened.\n */\n\nenum AVOptionType{\n    AV_OPT_TYPE_FLAGS,\n    AV_OPT_TYPE_INT,\n    AV_OPT_TYPE_INT64,\n    AV_OPT_TYPE_DOUBLE,\n    AV_OPT_TYPE_FLOAT,\n    AV_OPT_TYPE_STRING,\n    AV_OPT_TYPE_RATIONAL,\n    AV_OPT_TYPE_BINARY,  ///< offset must point to a pointer immediately followed by an int for the length\n    AV_OPT_TYPE_DICT,\n    AV_OPT_TYPE_UINT64,\n    AV_OPT_TYPE_CONST,\n    AV_OPT_TYPE_IMAGE_SIZE, ///< offset must point to two consecutive integers\n    AV_OPT_TYPE_PIXEL_FMT,\n    AV_OPT_TYPE_SAMPLE_FMT,\n    AV_OPT_TYPE_VIDEO_RATE, ///< offset must point to AVRational\n    AV_OPT_TYPE_DURATION,\n    AV_OPT_TYPE_COLOR,\n    AV_OPT_TYPE_CHANNEL_LAYOUT,\n    AV_OPT_TYPE_BOOL,\n};\n\n/**\n * AVOption\n */\ntypedef struct AVOption {\n    const char *name;\n\n    /**\n     * short English help text\n     * @todo What about other languages?\n     */\n    const char *help;\n\n    /**\n     * The offset relative to the context structure 
where the option\n     * value is stored. It should be 0 for named constants.\n     */\n    int offset;\n    enum AVOptionType type;\n\n    /**\n     * the default value for scalar options\n     */\n    union {\n        int64_t i64;\n        double dbl;\n        const char *str;\n        /* TODO those are unused now */\n        AVRational q;\n    } default_val;\n    double min;                 ///< minimum valid value for the option\n    double max;                 ///< maximum valid value for the option\n\n    int flags;\n#define AV_OPT_FLAG_ENCODING_PARAM  1   ///< a generic parameter which can be set by the user for muxing or encoding\n#define AV_OPT_FLAG_DECODING_PARAM  2   ///< a generic parameter which can be set by the user for demuxing or decoding\n#define AV_OPT_FLAG_AUDIO_PARAM     8\n#define AV_OPT_FLAG_VIDEO_PARAM     16\n#define AV_OPT_FLAG_SUBTITLE_PARAM  32\n/**\n * The option is intended for exporting values to the caller.\n */\n#define AV_OPT_FLAG_EXPORT          64\n/**\n * The option may not be set through the AVOptions API, only read.\n * This flag only makes sense when AV_OPT_FLAG_EXPORT is also set.\n */\n#define AV_OPT_FLAG_READONLY        128\n#define AV_OPT_FLAG_BSF_PARAM       (1<<8) ///< a generic parameter which can be set by the user for bit stream filtering\n#define AV_OPT_FLAG_FILTERING_PARAM (1<<16) ///< a generic parameter which can be set by the user for filtering\n//FIXME think about enc-audio, ... style flags\n\n    /**\n     * The logical unit to which the option belongs. Non-constant\n     * options and corresponding named constants share the same\n     * unit. 
May be NULL.\n     */\n    const char *unit;\n} AVOption;\n\n/**\n * A single allowed range of values, or a single allowed value.\n */\ntypedef struct AVOptionRange {\n    const char *str;\n    /**\n     * Value range.\n     * For string ranges this represents the min/max length.\n     * For dimensions this represents the min/max pixel count or width/height in the multi-component case.\n     */\n    double value_min, value_max;\n    /**\n     * Value's component range.\n     * For strings this represents the Unicode range for characters; 0-127 limits the range to ASCII.\n     */\n    double component_min, component_max;\n    /**\n     * Range flag.\n     * If set to 1 the struct encodes a range, if set to 0 a single value.\n     */\n    int is_range;\n} AVOptionRange;\n\n/**\n * List of AVOptionRange structs.\n */\ntypedef struct AVOptionRanges {\n    /**\n     * Array of option ranges.\n     *\n     * Most option types use just one component.\n     * The following describes the multi-component option types:\n     *\n     * AV_OPT_TYPE_IMAGE_SIZE:\n     * component index 0: range of pixel count (width * height).\n     * component index 1: range of width.\n     * component index 2: range of height.\n     *\n     * @note To obtain the multi-component version of this structure, the user must\n     *       pass AV_OPT_MULTI_COMPONENT_RANGE to the av_opt_query_ranges() or\n     *       av_opt_query_ranges_default() function.\n     *\n     * A multi-component range can be read as in the following example:\n     *\n     * @code\n     * int range_index, component_index;\n     * AVOptionRanges *ranges;\n     * AVOptionRange *range[3]; //may require more than 3 in the future.\n     * av_opt_query_ranges(&ranges, obj, key, AV_OPT_MULTI_COMPONENT_RANGE);\n     * for (range_index = 0; range_index < ranges->nb_ranges; range_index++) {\n     *     for (component_index = 0; component_index < ranges->nb_components; component_index++)\n     *         range[component_index] = ranges->range[ranges->nb_ranges * component_index + 
range_index];\n     *     //do something with range here.\n     * }\n     * av_opt_freep_ranges(&ranges);\n     * @endcode\n     */\n    AVOptionRange **range;\n    /**\n     * Number of ranges per component.\n     */\n    int nb_ranges;\n    /**\n     * Number of components.\n     */\n    int nb_components;\n} AVOptionRanges;\n\n/**\n * Show the obj options.\n *\n * @param req_flags requested flags for the options to show. Show only the\n * options for which (opt->flags & req_flags) is nonzero.\n * @param rej_flags rejected flags for the options to show. Show only the\n * options for which (opt->flags & rej_flags) is zero.\n * @param av_log_obj log context to use for showing the options\n */\nint av_opt_show2(void *obj, void *av_log_obj, int req_flags, int rej_flags);\n\n/**\n * Set the values of all AVOption fields to their default values.\n *\n * @param s an AVOption-enabled struct (its first member must be a pointer to AVClass)\n */\nvoid av_opt_set_defaults(void *s);\n\n/**\n * Set the values of all AVOption fields to their default values. Only those\n * AVOption fields for which (opt->flags & mask) == flags will have their\n * default applied to s.\n *\n * @param s an AVOption-enabled struct (its first member must be a pointer to AVClass)\n * @param mask combination of AV_OPT_FLAG_*\n * @param flags combination of AV_OPT_FLAG_*\n */\nvoid av_opt_set_defaults2(void *s, int mask, int flags);\n\n/**\n * Parse the key/value pairs list in opts. For each key/value pair\n * found, stores the value in the field in ctx that is named like the\n * key. 
ctx must be an AVClass context; storing is done using\n * AVOptions.\n *\n * @param opts options string to parse, may be NULL\n * @param key_val_sep a 0-terminated list of characters used to\n * separate key from value\n * @param pairs_sep a 0-terminated list of characters used to separate\n * two pairs from each other\n * @return the number of successfully set key/value pairs, or a negative\n * value corresponding to an AVERROR code in case of error:\n * AVERROR(EINVAL) if opts cannot be parsed,\n * the error code issued by av_opt_set() if a key/value pair\n * cannot be set\n */\nint av_set_options_string(void *ctx, const char *opts,\n                          const char *key_val_sep, const char *pairs_sep);\n\n/**\n * Parse the key-value pairs list in opts. For each key=value pair found,\n * set the value of the corresponding option in ctx.\n *\n * @param ctx          the AVClass object to set options on\n * @param opts         the options string, key-value pairs separated by a\n *                     delimiter\n * @param shorthand    a NULL-terminated array of option names for shorthand\n *                     notation: if the first field in opts has no key part,\n *                     the key is taken from the first element of shorthand;\n *                     then again for the second, etc., until either opts is\n *                     finished, shorthand is finished or a named option is\n *                     found; after that, all options must be named\n * @param key_val_sep  a 0-terminated list of characters used to separate\n *                     key from value, for example '='\n * @param pairs_sep    a 0-terminated list of characters used to separate\n *                     two pairs from each other, for example ':' or ','\n * @return  the number of successfully set key=value pairs, or a negative\n *          value corresponding to an AVERROR code in case of error:\n *          AVERROR(EINVAL) if opts cannot be parsed,\n *          the error code 
issued by av_set_string3() if a key/value pair\n *          cannot be set\n *\n * Option names must use only the following characters: a-z A-Z 0-9 - . / _\n * Separators must use characters distinct from option names and from each\n * other.\n */\nint av_opt_set_from_string(void *ctx, const char *opts,\n                           const char *const *shorthand,\n                           const char *key_val_sep, const char *pairs_sep);\n/**\n * Free all allocated objects in obj.\n */\nvoid av_opt_free(void *obj);\n\n/**\n * Check whether a particular flag is set in a flags field.\n *\n * @param field_name the name of the flag field option\n * @param flag_name the name of the flag to check\n * @return non-zero if the flag is set, zero if the flag isn't set,\n *         isn't of the right type, or the flags field doesn't exist.\n */\nint av_opt_flag_is_set(void *obj, const char *field_name, const char *flag_name);\n\n/**\n * Set all the options from a given dictionary on an object.\n *\n * @param obj a struct whose first element is a pointer to AVClass\n * @param options options to process. This dictionary will be freed and replaced\n *                by a new one containing all options not found in obj.\n *                Of course this new dictionary needs to be freed by the caller\n *                with av_dict_free().\n *\n * @return 0 on success, a negative AVERROR if some option was found in obj,\n *         but could not be set.\n *\n * @see av_dict_copy()\n */\nint av_opt_set_dict(void *obj, struct AVDictionary **options);\n\n\n/**\n * Set all the options from a given dictionary on an object.\n *\n * @param obj a struct whose first element is a pointer to AVClass\n * @param options options to process. 
This dictionary will be freed and replaced\n *                by a new one containing all options not found in obj.\n *                Of course this new dictionary needs to be freed by the caller\n *                with av_dict_free().\n * @param search_flags A combination of AV_OPT_SEARCH_*.\n *\n * @return 0 on success, a negative AVERROR if some option was found in obj,\n *         but could not be set.\n *\n * @see av_dict_copy()\n */\nint av_opt_set_dict2(void *obj, struct AVDictionary **options, int search_flags);\n\n/**\n * Extract a key-value pair from the beginning of a string.\n *\n * @param ropts        pointer to the options string, will be updated to\n *                     point to the rest of the string (one of the pairs_sep\n *                     or the final NUL)\n * @param key_val_sep  a 0-terminated list of characters used to separate\n *                     key from value, for example '='\n * @param pairs_sep    a 0-terminated list of characters used to separate\n *                     two pairs from each other, for example ':' or ','\n * @param flags        flags; see the AV_OPT_FLAG_* values below\n * @param rkey         parsed key; must be freed using av_free()\n * @param rval         parsed value; must be freed using av_free()\n *\n * @return  >=0 for success, or a negative value corresponding to an\n *          AVERROR code in case of error; in particular:\n *          AVERROR(EINVAL) if no key is present\n *\n */\nint av_opt_get_key_value(const char **ropts,\n                         const char *key_val_sep, const char *pairs_sep,\n                         unsigned flags,\n                         char **rkey, char **rval);\n\nenum {\n\n    /**\n     * Accept a value without a key; the key will then be returned\n     * as NULL.\n     */\n    AV_OPT_FLAG_IMPLICIT_KEY = 1,\n};\n\n/**\n * @defgroup opt_eval_funcs Evaluating option strings\n * @{\n * This group of functions can be used to evaluate option strings\n * and get numbers out 
of them. They do the same thing as av_opt_set(),\n * except the result is written into the caller-supplied pointer.\n *\n * @param obj a struct whose first element is a pointer to AVClass.\n * @param o an option for which the string is to be evaluated.\n * @param val string to be evaluated.\n * @param *_out value of the string will be written here.\n *\n * @return 0 on success, a negative number on failure.\n */\nint av_opt_eval_flags (void *obj, const AVOption *o, const char *val, int        *flags_out);\nint av_opt_eval_int   (void *obj, const AVOption *o, const char *val, int        *int_out);\nint av_opt_eval_int64 (void *obj, const AVOption *o, const char *val, int64_t    *int64_out);\nint av_opt_eval_float (void *obj, const AVOption *o, const char *val, float      *float_out);\nint av_opt_eval_double(void *obj, const AVOption *o, const char *val, double     *double_out);\nint av_opt_eval_q     (void *obj, const AVOption *o, const char *val, AVRational *q_out);\n/**\n * @}\n */\n\n#define AV_OPT_SEARCH_CHILDREN   (1 << 0) /**< Search in possible children of the\n                                               given object first. */\n/**\n *  The obj passed to av_opt_find() is fake -- only a double pointer to AVClass\n *  instead of a required pointer to a struct containing AVClass. This is\n *  useful for searching for options without needing to allocate the corresponding\n *  object.\n */\n#define AV_OPT_SEARCH_FAKE_OBJ   (1 << 1)\n\n/**\n *  In av_opt_get, return NULL if the option has a pointer type and is set to NULL,\n *  rather than returning an empty string.\n */\n#define AV_OPT_ALLOW_NULL (1 << 2)\n\n/**\n *  Allows av_opt_query_ranges and av_opt_query_ranges_default to return more than\n *  one component for certain option types.\n *  @see AVOptionRanges for details.\n */\n#define AV_OPT_MULTI_COMPONENT_RANGE (1 << 12)\n\n/**\n * Look for an option in an object. 
Consider only options which\n * have all the specified flags set.\n *\n * @param[in] obj A pointer to a struct whose first element is a\n *                pointer to an AVClass.\n *                Alternatively a double pointer to an AVClass, if\n *                AV_OPT_SEARCH_FAKE_OBJ search flag is set.\n * @param[in] name The name of the option to look for.\n * @param[in] unit When searching for named constants, name of the unit\n *                 it belongs to.\n * @param opt_flags Find only options with all the specified flags set (AV_OPT_FLAG).\n * @param search_flags A combination of AV_OPT_SEARCH_*.\n *\n * @return A pointer to the option found, or NULL if no option\n *         was found.\n *\n * @note Options found with AV_OPT_SEARCH_CHILDREN flag may not be settable\n * directly with av_opt_set(). Use special calls which take an options\n * AVDictionary (e.g. avformat_open_input()) to set options found with this\n * flag.\n */\nconst AVOption *av_opt_find(void *obj, const char *name, const char *unit,\n                            int opt_flags, int search_flags);\n\n/**\n * Look for an option in an object. Consider only options which\n * have all the specified flags set.\n *\n * @param[in] obj A pointer to a struct whose first element is a\n *                pointer to an AVClass.\n *                Alternatively a double pointer to an AVClass, if\n *                AV_OPT_SEARCH_FAKE_OBJ search flag is set.\n * @param[in] name The name of the option to look for.\n * @param[in] unit When searching for named constants, name of the unit\n *                 it belongs to.\n * @param opt_flags Find only options with all the specified flags set (AV_OPT_FLAG).\n * @param search_flags A combination of AV_OPT_SEARCH_*.\n * @param[out] target_obj if non-NULL, an object to which the option belongs will be\n * written here. It may be different from obj if AV_OPT_SEARCH_CHILDREN is present\n * in search_flags. 
This parameter is ignored if search_flags contain\n * AV_OPT_SEARCH_FAKE_OBJ.\n *\n * @return A pointer to the option found, or NULL if no option\n *         was found.\n */\nconst AVOption *av_opt_find2(void *obj, const char *name, const char *unit,\n                             int opt_flags, int search_flags, void **target_obj);\n\n/**\n * Iterate over all AVOptions belonging to obj.\n *\n * @param obj an AVOptions-enabled struct or a double pointer to an\n *            AVClass describing it.\n * @param prev result of the previous call to av_opt_next() on this object\n *             or NULL\n * @return next AVOption or NULL\n */\nconst AVOption *av_opt_next(const void *obj, const AVOption *prev);\n\n/**\n * Iterate over AVOptions-enabled children of obj.\n *\n * @param prev result of a previous call to this function or NULL\n * @return next AVOptions-enabled child or NULL\n */\nvoid *av_opt_child_next(void *obj, void *prev);\n\n/**\n * Iterate over potential AVOptions-enabled children of parent.\n *\n * @param prev result of a previous call to this function or NULL\n * @return AVClass corresponding to next potential child or NULL\n */\nconst AVClass *av_opt_child_class_next(const AVClass *parent, const AVClass *prev);\n\n/**\n * @defgroup opt_set_funcs Option setting functions\n * @{\n * Those functions set the field of obj with the given name to value.\n *\n * @param[in] obj A struct whose first element is a pointer to an AVClass.\n * @param[in] name the name of the field to set\n * @param[in] val The value to set. In case of av_opt_set() if the field is not\n * of a string type, then the given string is parsed.\n * SI postfixes and some named scalars are supported.\n * If the field is of a numeric type, it has to be a numeric or named\n * scalar. Behavior with more than one scalar and +- infix operators\n * is undefined.\n * If the field is of a flags type, it has to be a sequence of numeric\n * scalars or named flags separated by '+' or '-'. 
Prefixing a flag\n * with '+' causes it to be set without affecting the other flags;\n * similarly, '-' unsets a flag.\n * @param search_flags flags passed to av_opt_find2. I.e. if AV_OPT_SEARCH_CHILDREN\n * is passed here, then the option may be set on a child of obj.\n *\n * @return 0 if the value has been set, or an AVERROR code in case of\n * error:\n * AVERROR_OPTION_NOT_FOUND if no matching option exists\n * AVERROR(ERANGE) if the value is out of range\n * AVERROR(EINVAL) if the value is not valid\n */\nint av_opt_set         (void *obj, const char *name, const char *val, int search_flags);\nint av_opt_set_int     (void *obj, const char *name, int64_t     val, int search_flags);\nint av_opt_set_double  (void *obj, const char *name, double      val, int search_flags);\nint av_opt_set_q       (void *obj, const char *name, AVRational  val, int search_flags);\nint av_opt_set_bin     (void *obj, const char *name, const uint8_t *val, int size, int search_flags);\nint av_opt_set_image_size(void *obj, const char *name, int w, int h, int search_flags);\nint av_opt_set_pixel_fmt (void *obj, const char *name, enum AVPixelFormat fmt, int search_flags);\nint av_opt_set_sample_fmt(void *obj, const char *name, enum AVSampleFormat fmt, int search_flags);\nint av_opt_set_video_rate(void *obj, const char *name, AVRational val, int search_flags);\nint av_opt_set_channel_layout(void *obj, const char *name, int64_t ch_layout, int search_flags);\n/**\n * @note Any old dictionary present is discarded and replaced with a copy of the new one. 
The\n * caller still owns val and is responsible for freeing it.\n */\nint av_opt_set_dict_val(void *obj, const char *name, const AVDictionary *val, int search_flags);\n\n/**\n * Set a binary option to an integer list.\n *\n * @param obj    AVClass object to set options on\n * @param name   name of the binary option\n * @param val    pointer to an integer list (must have the correct type with\n *               regard to the contents of the list)\n * @param term   list terminator (usually 0 or -1)\n * @param flags  search flags\n */\n#define av_opt_set_int_list(obj, name, val, term, flags) \\\n    (av_int_list_length(val, term) > INT_MAX / sizeof(*(val)) ? \\\n     AVERROR(EINVAL) : \\\n     av_opt_set_bin(obj, name, (const uint8_t *)(val), \\\n                    av_int_list_length(val, term) * sizeof(*(val)), flags))\n\n/**\n * @}\n */\n\n/**\n * @defgroup opt_get_funcs Option getting functions\n * @{\n * Those functions get a value of the option with the given name from an object.\n *\n * @param[in] obj a struct whose first element is a pointer to an AVClass.\n * @param[in] name name of the option to get.\n * @param[in] search_flags flags passed to av_opt_find2. I.e. 
if AV_OPT_SEARCH_CHILDREN\n * is passed here, then the option may be found in a child of obj.\n * @param[out] out_val value of the option will be written here\n * @return >=0 on success, a negative error code otherwise\n */\n/**\n * @note the returned string will be av_malloc()ed and must be av_free()ed by the caller\n *\n * @note if AV_OPT_ALLOW_NULL is set in search_flags in av_opt_get, and the option has\n * AV_OPT_TYPE_STRING or AV_OPT_TYPE_BINARY and is set to NULL, *out_val will be set\n * to NULL instead of an allocated empty string.\n */\nint av_opt_get         (void *obj, const char *name, int search_flags, uint8_t   **out_val);\nint av_opt_get_int     (void *obj, const char *name, int search_flags, int64_t    *out_val);\nint av_opt_get_double  (void *obj, const char *name, int search_flags, double     *out_val);\nint av_opt_get_q       (void *obj, const char *name, int search_flags, AVRational *out_val);\nint av_opt_get_image_size(void *obj, const char *name, int search_flags, int *w_out, int *h_out);\nint av_opt_get_pixel_fmt (void *obj, const char *name, int search_flags, enum AVPixelFormat *out_fmt);\nint av_opt_get_sample_fmt(void *obj, const char *name, int search_flags, enum AVSampleFormat *out_fmt);\nint av_opt_get_video_rate(void *obj, const char *name, int search_flags, AVRational *out_val);\nint av_opt_get_channel_layout(void *obj, const char *name, int search_flags, int64_t *ch_layout);\n/**\n * @param[out] out_val The returned dictionary is a copy of the actual value and must\n * be freed with av_dict_free() by the caller\n */\nint av_opt_get_dict_val(void *obj, const char *name, int search_flags, AVDictionary **out_val);\n/**\n * @}\n */\n/**\n * Get a pointer to the requested field in a struct.\n * This function allows accessing a struct even when its fields are moved or\n * renamed since the application making the access has been compiled.\n *\n * @returns a pointer to the field; it can be cast to the correct type and read\n *          or 
written to.\n */\nvoid *av_opt_ptr(const AVClass *avclass, void *obj, const char *name);\n\n/**\n * Free an AVOptionRanges struct and set it to NULL.\n */\nvoid av_opt_freep_ranges(AVOptionRanges **ranges);\n\n/**\n * Get a list of allowed ranges for the given option.\n *\n * The returned list may depend on other fields in obj, such as, for example, profile.\n *\n * @param flags is a bitmask of flags; undefined flags should not be set and should be ignored\n *              AV_OPT_SEARCH_FAKE_OBJ indicates that the obj is a double pointer to an AVClass instead of a full instance\n *              AV_OPT_MULTI_COMPONENT_RANGE indicates that the function may return more than one component, @see AVOptionRanges\n *\n * The result must be freed with av_opt_freep_ranges.\n *\n * @return number of components returned on success, a negative error code otherwise\n */\nint av_opt_query_ranges(AVOptionRanges **, void *obj, const char *key, int flags);\n\n/**\n * Copy options from src object into dest object.\n *\n * Options that require memory allocation (e.g. 
string or binary) are malloc'ed in the dest object.\n * Original memory allocated for such options is freed unless both src and dest options point to the same memory.\n *\n * @param dest Object to copy into\n * @param src  Object to copy from\n * @return 0 on success, negative on error\n */\nint av_opt_copy(void *dest, const void *src);\n\n/**\n * Get a default list of allowed ranges for the given option.\n *\n * This list is constructed without using the AVClass.query_ranges() callback\n * and can be used as a fallback from within the callback.\n *\n * @param flags is a bitmask of flags; undefined flags should not be set and should be ignored\n *              AV_OPT_SEARCH_FAKE_OBJ indicates that the obj is a double pointer to an AVClass instead of a full instance\n *              AV_OPT_MULTI_COMPONENT_RANGE indicates that the function may return more than one component, @see AVOptionRanges\n *\n * The result must be freed with av_opt_freep_ranges.\n *\n * @return number of components returned on success, a negative error code otherwise\n */\nint av_opt_query_ranges_default(AVOptionRanges **, void *obj, const char *key, int flags);\n\n/**\n * Check if given option is set to its default value.\n *\n * The option o must belong to obj. 
This function must not be called to check the state of a child's options.\n * @see av_opt_is_set_to_default_by_name().\n *\n * @param obj  AVClass object to check option on\n * @param o    option to be checked\n * @return     >0 when option is set to its default,\n *              0 when option is not set to its default,\n *             <0 on error\n */\nint av_opt_is_set_to_default(void *obj, const AVOption *o);\n\n/**\n * Check if given option is set to its default value.\n *\n * @param obj          AVClass object to check option on\n * @param name         option name\n * @param search_flags combination of AV_OPT_SEARCH_*\n * @return             >0 when option is set to its default,\n *                     0 when option is not set to its default,\n *                     <0 on error\n */\nint av_opt_is_set_to_default_by_name(void *obj, const char *name, int search_flags);\n\n\n#define AV_OPT_SERIALIZE_SKIP_DEFAULTS              0x00000001  ///< Serialize only options that are not set to their default values.\n#define AV_OPT_SERIALIZE_OPT_FLAGS_EXACT            0x00000002  ///< Serialize only options that exactly match opt_flags.\n\n/**\n * Serialize object's options.\n *\n * Create a string containing the object's serialized options.\n * Such a string may be passed back to av_opt_set_from_string() in order to restore option values.\n * A key/value or pairs separator occurring in the serialized value or\n * name string is escaped through the av_escape() function.\n *\n * @param[in]  obj           AVClass object to serialize\n * @param[in]  opt_flags     serialize options with all the specified flags set (AV_OPT_FLAG)\n * @param[in]  flags         combination of AV_OPT_SERIALIZE_* flags\n * @param[out] buffer        Pointer to buffer that will be allocated with a string containing the serialized options.\n *                           The buffer must be freed by the caller when it is no longer needed.\n * @param[in]  key_val_sep   character used to separate key from value\n * @param[in]  pairs_sep     
character used to separate two pairs from each other\n * @return                   >= 0 on success, negative on error\n * @warning The separators can be neither '\\\\' nor '\\0'. They also cannot be the same.\n */\nint av_opt_serialize(void *obj, int opt_flags, int flags, char **buffer,\n                     const char key_val_sep, const char pairs_sep);\n/**\n * @}\n */\n\n#endif /* AVUTIL_OPT_H */\n"
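The mechanism behind the AVOption tables in this header can be illustrated without libavutil at all: each option records an offsetof() byte offset into the context struct, and the setter locates the field by name and stores through that offset. The following standalone sketch uses hypothetical names (`opt_entry`, `test_struct`, `opt_set_num` are invented for this illustration and are not part of the FFmpeg API; the real av_opt_set_int() additionally handles strings, ranges, and child objects):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified analogue of enum AVOptionType. */
typedef enum { OPT_TYPE_INT, OPT_TYPE_DOUBLE } opt_type;

/* Hypothetical, simplified analogue of AVOption: name + offset + type. */
typedef struct {
    const char *name;
    int         offset;      /* offsetof() into the target struct, like AVOption.offset */
    opt_type    type;
    double      default_val; /* default, unused by this tiny setter */
} opt_entry;

typedef struct {
    int    int_opt;
    double double_opt;
} test_struct;

/* Option table, NULL-name terminated like an AVOption array. */
static const opt_entry test_opts[] = {
    { "test_int",    offsetof(test_struct, int_opt),    OPT_TYPE_INT,    -1.0 },
    { "test_double", offsetof(test_struct, double_opt), OPT_TYPE_DOUBLE,  0.5 },
    { NULL, 0, OPT_TYPE_INT, 0.0 },
};

/* Look the option up by name and write the value through base + offset,
 * which is the core of what av_opt_set_int()/av_opt_set_double() do. */
static int opt_set_num(void *obj, const opt_entry *opts,
                       const char *name, double val)
{
    const opt_entry *o;
    for (o = opts; o->name; o++) {
        void *dst;
        if (strcmp(o->name, name))
            continue;
        dst = (char *)obj + o->offset;
        if (o->type == OPT_TYPE_INT)
            *(int *)dst = (int)val;
        else
            *(double *)dst = val;
        return 0;
    }
    return -1; /* option not found; AVERROR_OPTION_NOT_FOUND in the real API */
}
```

With a table like this, setting an option by name reduces to a linear search plus a typed store through the recorded offset, which is exactly the role AVOption.offset plays for the av_opt_set_*() family.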
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/parseutils.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_PARSEUTILS_H\n#define AVUTIL_PARSEUTILS_H\n\n#include <time.h>\n\n#include \"rational.h\"\n\n/**\n * @file\n * misc parsing utilities\n */\n\n/**\n * Parse str and store the parsed ratio in q.\n *\n * Note that a ratio with infinite (1/0) or negative value is\n * considered valid, so you should check on the returned value if you\n * want to exclude those values.\n *\n * The undefined value can be expressed using the \"0:0\" string.\n *\n * @param[in,out] q pointer to the AVRational which will contain the ratio\n * @param[in] str the string to parse: it has to be a string in the format\n * num:den, a float number or an expression\n * @param[in] max the maximum allowed numerator and denominator\n * @param[in] log_offset log level offset which is applied to the log\n * level of log_ctx\n * @param[in] log_ctx parent logging context\n * @return >= 0 on success, a negative error code otherwise\n */\nint av_parse_ratio(AVRational *q, const char *str, int max,\n                   int log_offset, void *log_ctx);\n\n#define av_parse_ratio_quiet(rate, str, max) \\\n    av_parse_ratio(rate, str, max, AV_LOG_MAX_OFFSET, NULL)\n\n/**\n * Parse str and put in width_ptr and 
height_ptr the detected values.\n *\n * @param[in,out] width_ptr pointer to the variable which will contain the detected\n * width value\n * @param[in,out] height_ptr pointer to the variable which will contain the detected\n * height value\n * @param[in] str the string to parse: it has to be a string in the format\n * width x height or a valid video size abbreviation.\n * @return >= 0 on success, a negative error code otherwise\n */\nint av_parse_video_size(int *width_ptr, int *height_ptr, const char *str);\n\n/**\n * Parse str and store the detected values in *rate.\n *\n * @param[in,out] rate pointer to the AVRational which will contain the detected\n * frame rate\n * @param[in] str the string to parse: it has to be a string in the format\n * rate_num / rate_den, a float number or a valid video rate abbreviation\n * @return >= 0 on success, a negative error code otherwise\n */\nint av_parse_video_rate(AVRational *rate, const char *str);\n\n/**\n * Put the RGBA values that correspond to color_string in rgba_color.\n *\n * @param color_string a string specifying a color. It can be the name of\n * a color (case insensitive match) or a [0x|#]RRGGBB[AA] sequence,\n * possibly followed by \"@\" and a string representing the alpha\n * component.\n * The alpha component may be a string composed by \"0x\" followed by an\n * hexadecimal number or a decimal number between 0.0 and 1.0, which\n * represents the opacity value (0x00/0.0 means completely transparent,\n * 0xff/1.0 completely opaque).\n * If the alpha component is not specified then 0xff is assumed.\n * The string \"random\" will result in a random color.\n * @param slen length of the initial part of color_string containing the\n * color. 
It can be set to -1 if color_string is a null-terminated string\n * containing nothing other than the color.\n * @param log_ctx parent logging context, can be NULL\n * @return >= 0 in case of success, a negative value in case of\n * failure (for example if color_string cannot be parsed).\n */\nint av_parse_color(uint8_t *rgba_color, const char *color_string, int slen,\n                   void *log_ctx);\n\n/**\n * Get the name of a color from the internal table of hard-coded named\n * colors.\n *\n * This function is meant to enumerate the color names recognized by\n * av_parse_color().\n *\n * @param color_idx index of the requested color, starting from 0\n * @param rgb       if not NULL, will point to a 3-element array with the color value in RGB\n * @return the color name string or NULL if color_idx is not in the array\n */\nconst char *av_get_known_color_name(int color_idx, const uint8_t **rgb);\n\n/**\n * Parse timestr and return in *timeval a corresponding number of\n * microseconds.\n *\n * @param timeval receives the number of microseconds corresponding\n * to the string in timestr. If the string represents a duration, it\n * is the number of microseconds contained in the time interval.  If\n * the string is a date, it is the number of microseconds since the 1st of\n * January, 1970 up to the time of the parsed date.  
If timestr cannot\n * be successfully parsed, *timeval is set to INT64_MIN.\n *\n * @param timestr a string representing a date or a duration.\n * - If a date, the syntax is:\n * @code\n * [{YYYY-MM-DD|YYYYMMDD}[T|t| ]]{HH:MM:SS[.m...]|HHMMSS[.m...]}[Z]\n * now\n * @endcode\n * If the value is \"now\" it takes the current time.\n * Time is local time unless Z is appended, in which case it is\n * interpreted as UTC.\n * If the year-month-day part is not specified it takes the current\n * year-month-day.\n * - If a duration, the syntax is:\n * @code\n * [-][HH:]MM:SS[.m...]\n * [-]S+[.m...]\n * @endcode\n * @param duration flag which tells how to interpret timestr: if non-zero,\n * timestr is interpreted as a duration, otherwise as a date\n * @return >= 0 in case of success, a negative value corresponding to an\n * AVERROR code otherwise\n */\nint av_parse_time(int64_t *timeval, const char *timestr, int duration);\n\n/**\n * Attempt to find a specific tag in a URL.\n *\n * syntax: '?tag1=val1&tag2=val2...'. 
Little URL decoding is done.\n * Return 1 if found.\n */\nint av_find_info_tag(char *arg, int arg_size, const char *tag1, const char *info);\n\n/**\n * Simplified version of strptime.\n *\n * Parse the input string p according to the format string fmt and\n * store its results in the structure dt.\n * This implementation supports only a subset of the formats supported\n * by the standard strptime().\n *\n * The supported input field descriptors are listed below.\n * - %H: the hour as a decimal number, using a 24-hour clock, in the\n *   range '00' through '23'\n * - %J: hours as a decimal number, in the range '0' through INT_MAX\n * - %M: the minute as a decimal number, in the range '00' through '59'\n * - %S: the second as a decimal number, in the range '00' through '59'\n * - %Y: the year as a decimal number, using the Gregorian calendar\n * - %m: the month as a decimal number, in the range '1' through '12'\n * - %d: the day of the month as a decimal number, in the range '1'\n *   through '31'\n * - %T: alias for '%H:%M:%S'\n * - %%: a literal '%'\n *\n * @return a pointer to the first character not processed in this function\n *         call. In case the input string contains more characters than\n *         required by the format string, the return value points right after\n *         the last consumed input character. In case the whole input string\n *         is consumed, the return value points to the null byte at the end of\n *         the string. On failure NULL is returned.\n */\nchar *av_small_strptime(const char *p, const char *fmt, struct tm *dt);\n\n/**\n * Convert the decomposed UTC time in tm to a time_t value.\n */\ntime_t av_timegm(struct tm *tm);\n\n#endif /* AVUTIL_PARSEUTILS_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/pixdesc.h",
    "content": "/*\n * pixel format descriptor\n * Copyright (c) 2009 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_PIXDESC_H\n#define AVUTIL_PIXDESC_H\n\n#include <inttypes.h>\n\n#include \"attributes.h\"\n#include \"pixfmt.h\"\n#include \"version.h\"\n\ntypedef struct AVComponentDescriptor {\n    /**\n     * Which of the 4 planes contains the component.\n     */\n    int plane;\n\n    /**\n     * Number of elements between 2 horizontally consecutive pixels.\n     * Elements are bits for bitstream formats, bytes otherwise.\n     */\n    int step;\n\n    /**\n     * Number of elements before the component of the first pixel.\n     * Elements are bits for bitstream formats, bytes otherwise.\n     */\n    int offset;\n\n    /**\n     * Number of least significant bits that must be shifted away\n     * to get the value.\n     */\n    int shift;\n\n    /**\n     * Number of bits in the component.\n     */\n    int depth;\n\n#if FF_API_PLUS1_MINUS1\n    /** deprecated, use step instead */\n    attribute_deprecated int step_minus1;\n\n    /** deprecated, use depth instead */\n    attribute_deprecated int depth_minus1;\n\n    /** deprecated, use offset instead */\n    attribute_deprecated int 
offset_plus1;\n#endif\n} AVComponentDescriptor;\n\n/**\n * Descriptor that unambiguously describes how the bits of a pixel are\n * stored in the up to 4 data planes of an image. It also stores the\n * subsampling factors and number of components.\n *\n * @note This is separate from the colorspace (RGB, YCbCr, YPbPr, JPEG-style YUV\n *       and all the YUV variants): AVPixFmtDescriptor just stores how values\n *       are stored, not what these values represent.\n */\ntypedef struct AVPixFmtDescriptor {\n    const char *name;\n    uint8_t nb_components;  ///< The number of components each pixel has (1-4)\n\n    /**\n     * Amount to shift the luma width right to find the chroma width.\n     * For YV12 this is 1, for example.\n     * chroma_width = AV_CEIL_RSHIFT(luma_width, log2_chroma_w)\n     * The note above is needed to ensure rounding up.\n     * This value only refers to the chroma components.\n     */\n    uint8_t log2_chroma_w;\n\n    /**\n     * Amount to shift the luma height right to find the chroma height.\n     * For YV12 this is 1, for example.\n     * chroma_height = AV_CEIL_RSHIFT(luma_height, log2_chroma_h)\n     * The note above is needed to ensure rounding up.\n     * This value only refers to the chroma components.\n     */\n    uint8_t log2_chroma_h;\n\n    /**\n     * Combination of AV_PIX_FMT_FLAG_... 
flags.\n     */\n    uint64_t flags;\n\n    /**\n     * Parameters that describe how pixels are packed.\n     * If the format has 1 or 2 components, then luma is 0.\n     * If the format has 3 or 4 components:\n     *   if the RGB flag is set then 0 is red, 1 is green and 2 is blue;\n     *   otherwise 0 is luma, 1 is chroma-U and 2 is chroma-V.\n     *\n     * If present, the Alpha channel is always the last component.\n     */\n    AVComponentDescriptor comp[4];\n\n    /**\n     * Alternative comma-separated names.\n     */\n    const char *alias;\n} AVPixFmtDescriptor;\n\n/**\n * Pixel format is big-endian.\n */\n#define AV_PIX_FMT_FLAG_BE           (1 << 0)\n/**\n * Pixel format has a palette in data[1], values are indexes in this palette.\n */\n#define AV_PIX_FMT_FLAG_PAL          (1 << 1)\n/**\n * All values of a component are bit-wise packed end to end.\n */\n#define AV_PIX_FMT_FLAG_BITSTREAM    (1 << 2)\n/**\n * Pixel format is an HW accelerated format.\n */\n#define AV_PIX_FMT_FLAG_HWACCEL      (1 << 3)\n/**\n * At least one pixel component is not in the first data plane.\n */\n#define AV_PIX_FMT_FLAG_PLANAR       (1 << 4)\n/**\n * The pixel format contains RGB-like data (as opposed to YUV/grayscale).\n */\n#define AV_PIX_FMT_FLAG_RGB          (1 << 5)\n\n/**\n * The pixel format is \"pseudo-paletted\". This means that it contains a\n * fixed palette in the 2nd plane but the palette is fixed/constant for each\n * PIX_FMT. This allows interpreting the data as if it was PAL8, which can\n * in some cases be simpler. Or the data can be interpreted purely based on\n * the pixel format without using the palette.\n * An example of a pseudo-paletted format is AV_PIX_FMT_GRAY8\n *\n * @deprecated This flag is deprecated, and will be removed. When it is removed,\n * the extra palette allocation in AVFrame.data[1] is removed as well. Only\n * actual paletted formats (as indicated by AV_PIX_FMT_FLAG_PAL) will have a\n * palette. 
Starting with FFmpeg versions which have this flag deprecated, the\n * extra \"pseudo\" palette is already ignored, and API users are not required to\n * allocate a palette for AV_PIX_FMT_FLAG_PSEUDOPAL formats (it was required\n * before the deprecation, though).\n */\n#define AV_PIX_FMT_FLAG_PSEUDOPAL    (1 << 6)\n\n/**\n * The pixel format has an alpha channel. This is set on all formats that\n * support alpha in some way. The exception is AV_PIX_FMT_PAL8, which can\n * carry alpha as part of the palette. Details are explained in the\n * AVPixelFormat enum, and are also encoded in the corresponding\n * AVPixFmtDescriptor.\n *\n * The alpha is always straight, never pre-multiplied.\n *\n * If a codec or a filter does not support alpha, it should set all alpha to\n * opaque, or use the equivalent pixel formats without an alpha component, e.g.\n * AV_PIX_FMT_RGB0 (or AV_PIX_FMT_RGB24 etc.) instead of AV_PIX_FMT_RGBA.\n */\n#define AV_PIX_FMT_FLAG_ALPHA        (1 << 7)\n\n/**\n * The pixel format follows a Bayer pattern.\n */\n#define AV_PIX_FMT_FLAG_BAYER        (1 << 8)\n\n/**\n * The pixel format contains IEEE-754 floating point values. Precision (double,\n * single, or half) should be determined by the pixel size (64, 32, or 16 bits).\n */\n#define AV_PIX_FMT_FLAG_FLOAT        (1 << 9)\n\n/**\n * Return the number of bits per pixel used by the pixel format\n * described by pixdesc. 
Note that this is not the same as the number\n * of bits per sample.\n *\n * The returned number of bits refers to the number of bits actually\n * used for storing the pixel information, that is, padding bits are\n * not counted.\n */\nint av_get_bits_per_pixel(const AVPixFmtDescriptor *pixdesc);\n\n/**\n * Return the number of bits per pixel for the pixel format\n * described by pixdesc, including any padding or unused bits.\n */\nint av_get_padded_bits_per_pixel(const AVPixFmtDescriptor *pixdesc);\n\n/**\n * @return a pixel format descriptor for provided pixel format or NULL if\n * this pixel format is unknown.\n */\nconst AVPixFmtDescriptor *av_pix_fmt_desc_get(enum AVPixelFormat pix_fmt);\n\n/**\n * Iterate over all pixel format descriptors known to libavutil.\n *\n * @param prev previous descriptor. NULL to get the first descriptor.\n *\n * @return next descriptor or NULL after the last descriptor\n */\nconst AVPixFmtDescriptor *av_pix_fmt_desc_next(const AVPixFmtDescriptor *prev);\n\n/**\n * @return an AVPixelFormat id described by desc, or AV_PIX_FMT_NONE if desc\n * is not a valid pointer to a pixel format descriptor.\n */\nenum AVPixelFormat av_pix_fmt_desc_get_id(const AVPixFmtDescriptor *desc);\n\n/**\n * Utility function to access log2_chroma_w and log2_chroma_h from\n * the pixel format AVPixFmtDescriptor.\n *\n * @param[in]  pix_fmt the pixel format\n * @param[out] h_shift store log2_chroma_w (horizontal/width shift)\n * @param[out] v_shift store log2_chroma_h (vertical/height shift)\n *\n * @return 0 on success, AVERROR(ENOSYS) on invalid or unknown pixel format\n */\nint av_pix_fmt_get_chroma_sub_sample(enum AVPixelFormat pix_fmt,\n                                     int *h_shift, int *v_shift);\n\n/**\n * @return number of planes in pix_fmt, a negative AVERROR if pix_fmt is not a\n * valid pixel format.\n */\nint av_pix_fmt_count_planes(enum AVPixelFormat pix_fmt);\n\n/**\n * @return the name for provided color range or NULL if unknown.\n */\nconst 
char *av_color_range_name(enum AVColorRange range);\n\n/**\n * @return the AVColorRange value for name or a negative AVERROR code if not found.\n */\nint av_color_range_from_name(const char *name);\n\n/**\n * @return the name for provided color primaries or NULL if unknown.\n */\nconst char *av_color_primaries_name(enum AVColorPrimaries primaries);\n\n/**\n * @return the AVColorPrimaries value for name or a negative AVERROR code if not found.\n */\nint av_color_primaries_from_name(const char *name);\n\n/**\n * @return the name for provided color transfer or NULL if unknown.\n */\nconst char *av_color_transfer_name(enum AVColorTransferCharacteristic transfer);\n\n/**\n * @return the AVColorTransferCharacteristic value for name or a negative AVERROR code if not found.\n */\nint av_color_transfer_from_name(const char *name);\n\n/**\n * @return the name for provided color space or NULL if unknown.\n */\nconst char *av_color_space_name(enum AVColorSpace space);\n\n/**\n * @return the AVColorSpace value for name or a negative AVERROR code if not found.\n */\nint av_color_space_from_name(const char *name);\n\n/**\n * @return the name for provided chroma location or NULL if unknown.\n */\nconst char *av_chroma_location_name(enum AVChromaLocation location);\n\n/**\n * @return the AVChromaLocation value for name or a negative AVERROR code if not found.\n */\nint av_chroma_location_from_name(const char *name);\n\n/**\n * Return the pixel format corresponding to name.\n *\n * If there is no pixel format with name name, then looks for a\n * pixel format with the name corresponding to the native endian\n * format of name.\n * For example, in a little-endian system, first looks for \"gray16\",\n * then for \"gray16le\".\n *\n * Finally if no pixel format has been found, returns AV_PIX_FMT_NONE.\n */\nenum AVPixelFormat av_get_pix_fmt(const char *name);\n\n/**\n * Return the short name for a pixel format, NULL in case pix_fmt is\n * unknown.\n *\n * @see av_get_pix_fmt(), av_get_pix_fmt_string()\n */\nconst char *av_get_pix_fmt_name(enum 
AVPixelFormat pix_fmt);\n\n/**\n * Print in buf the string corresponding to the pixel format with\n * number pix_fmt, or a header if pix_fmt is negative.\n *\n * @param buf the buffer where to write the string\n * @param buf_size the size of buf\n * @param pix_fmt the number of the pixel format to print the\n * corresponding info string, or a negative value to print the\n * corresponding header.\n */\nchar *av_get_pix_fmt_string(char *buf, int buf_size,\n                            enum AVPixelFormat pix_fmt);\n\n/**\n * Read a line from an image, and write the values of the\n * pixel format component c to dst.\n *\n * @param data the array containing the pointers to the planes of the image\n * @param linesize the array containing the linesizes of the image\n * @param desc the pixel format descriptor for the image\n * @param x the horizontal coordinate of the first pixel to read\n * @param y the vertical coordinate of the first pixel to read\n * @param w the width of the line to read, that is the number of\n * values to write to dst\n * @param read_pal_component if not zero and the format is a paletted\n * format writes the values corresponding to the palette\n * component c in data[1] to dst, rather than the palette indexes in\n * data[0]. The behavior is undefined if the format is not paletted.\n */\nvoid av_read_image_line(uint16_t *dst, const uint8_t *data[4],\n                        const int linesize[4], const AVPixFmtDescriptor *desc,\n                        int x, int y, int c, int w, int read_pal_component);\n\n/**\n * Write the values from src to the pixel format component c of an\n * image line.\n *\n * @param src array containing the values to write\n * @param data the array containing the pointers to the planes of the\n * image to write into. 
It is supposed to be zeroed.\n * @param linesize the array containing the linesizes of the image\n * @param desc the pixel format descriptor for the image\n * @param x the horizontal coordinate of the first pixel to write\n * @param y the vertical coordinate of the first pixel to write\n * @param w the width of the line to write, that is the number of\n * values to write to the image line\n */\nvoid av_write_image_line(const uint16_t *src, uint8_t *data[4],\n                         const int linesize[4], const AVPixFmtDescriptor *desc,\n                         int x, int y, int c, int w);\n\n/**\n * Utility function to swap the endianness of a pixel format.\n *\n * @param[in]  pix_fmt the pixel format\n *\n * @return pixel format with swapped endianness if it exists,\n * otherwise AV_PIX_FMT_NONE\n */\nenum AVPixelFormat av_pix_fmt_swap_endianness(enum AVPixelFormat pix_fmt);\n\n#define FF_LOSS_RESOLUTION  0x0001 /**< loss due to resolution change */\n#define FF_LOSS_DEPTH       0x0002 /**< loss due to color depth change */\n#define FF_LOSS_COLORSPACE  0x0004 /**< loss due to color space conversion */\n#define FF_LOSS_ALPHA       0x0008 /**< loss of alpha bits */\n#define FF_LOSS_COLORQUANT  0x0010 /**< loss due to color quantization */\n#define FF_LOSS_CHROMA      0x0020 /**< loss of chroma (e.g. RGB to gray conversion) */\n\n/**\n * Compute what kind of losses will occur when converting from one specific\n * pixel format to another.\n * When converting from one pixel format to another, information loss may occur.\n * For example, when converting from RGB24 to GRAY, the color information will\n * be lost. Similarly, other losses occur when converting from some formats to\n * other formats. 
These losses can involve loss of chroma, but also loss of\n * resolution, loss of color depth, loss due to the color space conversion, loss\n * of the alpha bits or loss due to color quantization.\n * av_get_pix_fmt_loss() informs you about the various types of losses\n * which will occur when converting from one pixel format to another.\n *\n * @param[in] dst_pix_fmt destination pixel format\n * @param[in] src_pix_fmt source pixel format\n * @param[in] has_alpha Whether the source pixel format alpha channel is used.\n * @return Combination of flags informing you what kind of losses will occur\n * (maximum loss for an invalid dst_pix_fmt).\n */\nint av_get_pix_fmt_loss(enum AVPixelFormat dst_pix_fmt,\n                        enum AVPixelFormat src_pix_fmt,\n                        int has_alpha);\n\n/**\n * Find the best pixel format to convert to given a certain source pixel\n * format.\n * When converting from one pixel format to another, information loss may occur.\n * For example, when converting from RGB24 to GRAY, the color information will\n * be lost. Similarly, other losses occur when converting from some formats to\n * other formats. 
These losses can involve loss of chroma, but also loss of\n * resolution, loss of color depth, loss due to the color space conversion, loss\n * of the alpha bits or loss due to color quantization.\n * av_find_best_pix_fmt_of_2() selects which of the two given destination\n * pixel formats can be converted to from src_pix_fmt with the least amount\n * of loss.\n *\n * @param[in] dst_pix_fmt1 one of the two destination pixel formats to choose from\n * @param[in] dst_pix_fmt2 the other of the two destination pixel formats to choose from\n * @param[in] src_pix_fmt source pixel format\n * @param[in] has_alpha Whether the source pixel format alpha channel is used.\n * @param[in,out] loss_ptr Combination of loss flags. In: selects which of the\n * losses to ignore, i.e. NULL or a value of zero means we care about all\n * losses. Out: the loss that occurs when converting from src_pix_fmt to the\n * selected destination pixel format.\n * @return The best pixel format to convert to, or AV_PIX_FMT_NONE if none\n * was found.\n */\nenum AVPixelFormat av_find_best_pix_fmt_of_2(enum AVPixelFormat dst_pix_fmt1, enum AVPixelFormat dst_pix_fmt2,\n                                             enum AVPixelFormat src_pix_fmt, int has_alpha, int *loss_ptr);\n\n#endif /* AVUTIL_PIXDESC_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/pixelutils.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_PIXELUTILS_H\n#define AVUTIL_PIXELUTILS_H\n\n#include <stddef.h>\n#include <stdint.h>\n#include \"common.h\"\n\n/**\n * Sum of abs(src1[x] - src2[x])\n */\ntypedef int (*av_pixelutils_sad_fn)(const uint8_t *src1, ptrdiff_t stride1,\n                                    const uint8_t *src2, ptrdiff_t stride2);\n\n/**\n * Get a potentially optimized pointer to a Sum-of-absolute-differences\n * function (see the av_pixelutils_sad_fn prototype).\n *\n * @param w_bits  1<<w_bits is the requested width of the block size\n * @param h_bits  1<<h_bits is the requested height of the block size\n * @param aligned If set to 2, the returned sad function will assume src1 and\n *                src2 addresses are aligned on the block size.\n *                If set to 1, the returned sad function will assume src1 is\n *                aligned on the block size.\n *                If set to 0, the returned sad function assumes no particular\n *                alignment.\n * @param log_ctx context used for logging, can be NULL\n *\n * @return a pointer to the SAD function or NULL in case of error (because of\n *         invalid parameters)\n */\nav_pixelutils_sad_fn 
av_pixelutils_get_sad_fn(int w_bits, int h_bits,\n                                              int aligned, void *log_ctx);\n\n#endif /* AVUTIL_PIXELUTILS_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/pixfmt.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_PIXFMT_H\n#define AVUTIL_PIXFMT_H\n\n/**\n * @file\n * pixel format definitions\n */\n\n#include \"libavutil/avconfig.h\"\n#include \"version.h\"\n\n#define AVPALETTE_SIZE 1024\n#define AVPALETTE_COUNT 256\n\n/**\n * Pixel format.\n *\n * @note\n * AV_PIX_FMT_RGB32 is handled in an endian-specific manner. An RGBA\n * color is put together as:\n *  (A << 24) | (R << 16) | (G << 8) | B\n * This is stored as BGRA on little-endian CPU architectures and ARGB on\n * big-endian CPUs.\n *\n * @par\n * When the pixel format is palettized RGB32 (AV_PIX_FMT_PAL8), the palettized\n * image data is stored in AVFrame.data[0]. The palette is transported in\n * AVFrame.data[1], is 1024 bytes long (256 4-byte entries) and is\n * formatted the same as in AV_PIX_FMT_RGB32 described above (i.e., it is\n * also endian-specific). 
Note also that the individual RGB32 palette\n * components stored in AVFrame.data[1] should be in the range 0..255.\n * This is important as many custom PAL8 video codecs that were designed\n * to run on the IBM VGA graphics adapter use 6-bit palette components.\n *\n * @par\n * For all the 8 bits per pixel formats, an RGB32 palette is in data[1] like\n * for pal8. This palette is filled in automatically by the function\n * allocating the picture.\n */\nenum AVPixelFormat {\n    AV_PIX_FMT_NONE = -1,\n    AV_PIX_FMT_YUV420P,   ///< planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples)\n    AV_PIX_FMT_YUYV422,   ///< packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr\n    AV_PIX_FMT_RGB24,     ///< packed RGB 8:8:8, 24bpp, RGBRGB...\n    AV_PIX_FMT_BGR24,     ///< packed RGB 8:8:8, 24bpp, BGRBGR...\n    AV_PIX_FMT_YUV422P,   ///< planar YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples)\n    AV_PIX_FMT_YUV444P,   ///< planar YUV 4:4:4, 24bpp, (1 Cr & Cb sample per 1x1 Y samples)\n    AV_PIX_FMT_YUV410P,   ///< planar YUV 4:1:0,  9bpp, (1 Cr & Cb sample per 4x4 Y samples)\n    AV_PIX_FMT_YUV411P,   ///< planar YUV 4:1:1, 12bpp, (1 Cr & Cb sample per 4x1 Y samples)\n    AV_PIX_FMT_GRAY8,     ///<        Y        ,  8bpp\n    AV_PIX_FMT_MONOWHITE, ///<        Y        ,  1bpp, 0 is white, 1 is black, in each byte pixels are ordered from the msb to the lsb\n    AV_PIX_FMT_MONOBLACK, ///<        Y        ,  1bpp, 0 is black, 1 is white, in each byte pixels are ordered from the msb to the lsb\n    AV_PIX_FMT_PAL8,      ///< 8 bits with AV_PIX_FMT_RGB32 palette\n    AV_PIX_FMT_YUVJ420P,  ///< planar YUV 4:2:0, 12bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV420P and setting color_range\n    AV_PIX_FMT_YUVJ422P,  ///< planar YUV 4:2:2, 16bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV422P and setting color_range\n    AV_PIX_FMT_YUVJ444P,  ///< planar YUV 4:4:4, 24bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV444P and setting 
color_range\n    AV_PIX_FMT_UYVY422,   ///< packed YUV 4:2:2, 16bpp, Cb Y0 Cr Y1\n    AV_PIX_FMT_UYYVYY411, ///< packed YUV 4:1:1, 12bpp, Cb Y0 Y1 Cr Y2 Y3\n    AV_PIX_FMT_BGR8,      ///< packed RGB 3:3:2,  8bpp, (msb)2B 3G 3R(lsb)\n    AV_PIX_FMT_BGR4,      ///< packed RGB 1:2:1 bitstream,  4bpp, (msb)1B 2G 1R(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits\n    AV_PIX_FMT_BGR4_BYTE, ///< packed RGB 1:2:1,  8bpp, (msb)1B 2G 1R(lsb)\n    AV_PIX_FMT_RGB8,      ///< packed RGB 3:3:2,  8bpp, (msb)2R 3G 3B(lsb)\n    AV_PIX_FMT_RGB4,      ///< packed RGB 1:2:1 bitstream,  4bpp, (msb)1R 2G 1B(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits\n    AV_PIX_FMT_RGB4_BYTE, ///< packed RGB 1:2:1,  8bpp, (msb)1R 2G 1B(lsb)\n    AV_PIX_FMT_NV12,      ///< planar YUV 4:2:0, 12bpp, 1 plane for Y and 1 plane for the UV components, which are interleaved (first byte U and the following byte V)\n    AV_PIX_FMT_NV21,      ///< as above, but U and V bytes are swapped\n\n    AV_PIX_FMT_ARGB,      ///< packed ARGB 8:8:8:8, 32bpp, ARGBARGB...\n    AV_PIX_FMT_RGBA,      ///< packed RGBA 8:8:8:8, 32bpp, RGBARGBA...\n    AV_PIX_FMT_ABGR,      ///< packed ABGR 8:8:8:8, 32bpp, ABGRABGR...\n    AV_PIX_FMT_BGRA,      ///< packed BGRA 8:8:8:8, 32bpp, BGRABGRA...\n\n    AV_PIX_FMT_GRAY16BE,  ///<        Y        , 16bpp, big-endian\n    AV_PIX_FMT_GRAY16LE,  ///<        Y        , 16bpp, little-endian\n    AV_PIX_FMT_YUV440P,   ///< planar YUV 4:4:0 (1 Cr & Cb sample per 1x2 Y samples)\n    AV_PIX_FMT_YUVJ440P,  ///< planar YUV 4:4:0 full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV440P and setting color_range\n    AV_PIX_FMT_YUVA420P,  ///< planar YUV 4:2:0, 20bpp, (1 Cr & Cb sample per 2x2 Y & A samples)\n    AV_PIX_FMT_RGB48BE,   ///< packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as big-endian\n    AV_PIX_FMT_RGB48LE,   ///< packed RGB 
16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as little-endian\n\n    AV_PIX_FMT_RGB565BE,  ///< packed RGB 5:6:5, 16bpp, (msb)   5R 6G 5B(lsb), big-endian\n    AV_PIX_FMT_RGB565LE,  ///< packed RGB 5:6:5, 16bpp, (msb)   5R 6G 5B(lsb), little-endian\n    AV_PIX_FMT_RGB555BE,  ///< packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), big-endian   , X=unused/undefined\n    AV_PIX_FMT_RGB555LE,  ///< packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), little-endian, X=unused/undefined\n\n    AV_PIX_FMT_BGR565BE,  ///< packed BGR 5:6:5, 16bpp, (msb)   5B 6G 5R(lsb), big-endian\n    AV_PIX_FMT_BGR565LE,  ///< packed BGR 5:6:5, 16bpp, (msb)   5B 6G 5R(lsb), little-endian\n    AV_PIX_FMT_BGR555BE,  ///< packed BGR 5:5:5, 16bpp, (msb)1X 5B 5G 5R(lsb), big-endian   , X=unused/undefined\n    AV_PIX_FMT_BGR555LE,  ///< packed BGR 5:5:5, 16bpp, (msb)1X 5B 5G 5R(lsb), little-endian, X=unused/undefined\n\n#if FF_API_VAAPI\n    /** @name Deprecated pixel formats */\n    /**@{*/\n    AV_PIX_FMT_VAAPI_MOCO, ///< HW acceleration through VA API at motion compensation entry-point, Picture.data[3] contains a vaapi_render_state struct which contains macroblocks as well as various fields extracted from headers\n    AV_PIX_FMT_VAAPI_IDCT, ///< HW acceleration through VA API at IDCT entry-point, Picture.data[3] contains a vaapi_render_state struct which contains fields extracted from headers\n    AV_PIX_FMT_VAAPI_VLD,  ///< HW decoding through VA API, Picture.data[3] contains a VASurfaceID\n    /**@}*/\n    AV_PIX_FMT_VAAPI = AV_PIX_FMT_VAAPI_VLD,\n#else\n    /**\n     *  Hardware acceleration through VA-API, data[3] contains a\n     *  VASurfaceID.\n     */\n    AV_PIX_FMT_VAAPI,\n#endif\n\n    AV_PIX_FMT_YUV420P16LE,  ///< planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian\n    AV_PIX_FMT_YUV420P16BE,  ///< planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian\n    AV_PIX_FMT_YUV422P16LE,  ///< planar YUV 4:2:2, 
32bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian\n    AV_PIX_FMT_YUV422P16BE,  ///< planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian\n    AV_PIX_FMT_YUV444P16LE,  ///< planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian\n    AV_PIX_FMT_YUV444P16BE,  ///< planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian\n    AV_PIX_FMT_DXVA2_VLD,    ///< HW decoding through DXVA2, Picture.data[3] contains a LPDIRECT3DSURFACE9 pointer\n\n    AV_PIX_FMT_RGB444LE,  ///< packed RGB 4:4:4, 16bpp, (msb)4X 4R 4G 4B(lsb), little-endian, X=unused/undefined\n    AV_PIX_FMT_RGB444BE,  ///< packed RGB 4:4:4, 16bpp, (msb)4X 4R 4G 4B(lsb), big-endian,    X=unused/undefined\n    AV_PIX_FMT_BGR444LE,  ///< packed BGR 4:4:4, 16bpp, (msb)4X 4B 4G 4R(lsb), little-endian, X=unused/undefined\n    AV_PIX_FMT_BGR444BE,  ///< packed BGR 4:4:4, 16bpp, (msb)4X 4B 4G 4R(lsb), big-endian,    X=unused/undefined\n    AV_PIX_FMT_YA8,       ///< 8 bits gray, 8 bits alpha\n\n    AV_PIX_FMT_Y400A = AV_PIX_FMT_YA8, ///< alias for AV_PIX_FMT_YA8\n    AV_PIX_FMT_GRAY8A= AV_PIX_FMT_YA8, ///< alias for AV_PIX_FMT_YA8\n\n    AV_PIX_FMT_BGR48BE,   ///< packed RGB 16:16:16, 48bpp, 16B, 16G, 16R, the 2-byte value for each R/G/B component is stored as big-endian\n    AV_PIX_FMT_BGR48LE,   ///< packed RGB 16:16:16, 48bpp, 16B, 16G, 16R, the 2-byte value for each R/G/B component is stored as little-endian\n\n    /**\n     * The following 12 formats have the disadvantage of needing 1 format for each bit depth.\n     * Notice that each 9/10 bits sample is stored in 16 bits with extra padding.\n     * If you want to support multiple bit depths, then using AV_PIX_FMT_YUV420P16* with the bpp stored separately is better.\n     */\n    AV_PIX_FMT_YUV420P9BE, ///< planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian\n    AV_PIX_FMT_YUV420P9LE, ///< planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian\n   
 AV_PIX_FMT_YUV420P10BE,///< planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian\n    AV_PIX_FMT_YUV420P10LE,///< planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian\n    AV_PIX_FMT_YUV422P10BE,///< planar YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian\n    AV_PIX_FMT_YUV422P10LE,///< planar YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian\n    AV_PIX_FMT_YUV444P9BE, ///< planar YUV 4:4:4, 27bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian\n    AV_PIX_FMT_YUV444P9LE, ///< planar YUV 4:4:4, 27bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian\n    AV_PIX_FMT_YUV444P10BE,///< planar YUV 4:4:4, 30bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian\n    AV_PIX_FMT_YUV444P10LE,///< planar YUV 4:4:4, 30bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian\n    AV_PIX_FMT_YUV422P9BE, ///< planar YUV 4:2:2, 18bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian\n    AV_PIX_FMT_YUV422P9LE, ///< planar YUV 4:2:2, 18bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian\n    AV_PIX_FMT_GBRP,      ///< planar GBR 4:4:4 24bpp\n    AV_PIX_FMT_GBR24P = AV_PIX_FMT_GBRP, // alias for #AV_PIX_FMT_GBRP\n    AV_PIX_FMT_GBRP9BE,   ///< planar GBR 4:4:4 27bpp, big-endian\n    AV_PIX_FMT_GBRP9LE,   ///< planar GBR 4:4:4 27bpp, little-endian\n    AV_PIX_FMT_GBRP10BE,  ///< planar GBR 4:4:4 30bpp, big-endian\n    AV_PIX_FMT_GBRP10LE,  ///< planar GBR 4:4:4 30bpp, little-endian\n    AV_PIX_FMT_GBRP16BE,  ///< planar GBR 4:4:4 48bpp, big-endian\n    AV_PIX_FMT_GBRP16LE,  ///< planar GBR 4:4:4 48bpp, little-endian\n    AV_PIX_FMT_YUVA422P,  ///< planar YUV 4:2:2 24bpp, (1 Cr & Cb sample per 2x1 Y & A samples)\n    AV_PIX_FMT_YUVA444P,  ///< planar YUV 4:4:4 32bpp, (1 Cr & Cb sample per 1x1 Y & A samples)\n    AV_PIX_FMT_YUVA420P9BE,  ///< planar YUV 4:2:0 22.5bpp, (1 Cr & Cb sample per 2x2 Y & A samples), big-endian\n    AV_PIX_FMT_YUVA420P9LE,  ///< planar YUV 4:2:0 22.5bpp, (1 Cr & Cb 
sample per 2x2 Y & A samples), little-endian\n    AV_PIX_FMT_YUVA422P9BE,  ///< planar YUV 4:2:2 27bpp, (1 Cr & Cb sample per 2x1 Y & A samples), big-endian\n    AV_PIX_FMT_YUVA422P9LE,  ///< planar YUV 4:2:2 27bpp, (1 Cr & Cb sample per 2x1 Y & A samples), little-endian\n    AV_PIX_FMT_YUVA444P9BE,  ///< planar YUV 4:4:4 36bpp, (1 Cr & Cb sample per 1x1 Y & A samples), big-endian\n    AV_PIX_FMT_YUVA444P9LE,  ///< planar YUV 4:4:4 36bpp, (1 Cr & Cb sample per 1x1 Y & A samples), little-endian\n    AV_PIX_FMT_YUVA420P10BE, ///< planar YUV 4:2:0 25bpp, (1 Cr & Cb sample per 2x2 Y & A samples, big-endian)\n    AV_PIX_FMT_YUVA420P10LE, ///< planar YUV 4:2:0 25bpp, (1 Cr & Cb sample per 2x2 Y & A samples, little-endian)\n    AV_PIX_FMT_YUVA422P10BE, ///< planar YUV 4:2:2 30bpp, (1 Cr & Cb sample per 2x1 Y & A samples, big-endian)\n    AV_PIX_FMT_YUVA422P10LE, ///< planar YUV 4:2:2 30bpp, (1 Cr & Cb sample per 2x1 Y & A samples, little-endian)\n    AV_PIX_FMT_YUVA444P10BE, ///< planar YUV 4:4:4 40bpp, (1 Cr & Cb sample per 1x1 Y & A samples, big-endian)\n    AV_PIX_FMT_YUVA444P10LE, ///< planar YUV 4:4:4 40bpp, (1 Cr & Cb sample per 1x1 Y & A samples, little-endian)\n    AV_PIX_FMT_YUVA420P16BE, ///< planar YUV 4:2:0 40bpp, (1 Cr & Cb sample per 2x2 Y & A samples, big-endian)\n    AV_PIX_FMT_YUVA420P16LE, ///< planar YUV 4:2:0 40bpp, (1 Cr & Cb sample per 2x2 Y & A samples, little-endian)\n    AV_PIX_FMT_YUVA422P16BE, ///< planar YUV 4:2:2 48bpp, (1 Cr & Cb sample per 2x1 Y & A samples, big-endian)\n    AV_PIX_FMT_YUVA422P16LE, ///< planar YUV 4:2:2 48bpp, (1 Cr & Cb sample per 2x1 Y & A samples, little-endian)\n    AV_PIX_FMT_YUVA444P16BE, ///< planar YUV 4:4:4 64bpp, (1 Cr & Cb sample per 1x1 Y & A samples, big-endian)\n    AV_PIX_FMT_YUVA444P16LE, ///< planar YUV 4:4:4 64bpp, (1 Cr & Cb sample per 1x1 Y & A samples, little-endian)\n\n    AV_PIX_FMT_VDPAU,     ///< HW acceleration through VDPAU, Picture.data[3] contains a VdpVideoSurface\n\n    AV_PIX_FMT_XYZ12LE,     
 ///< packed XYZ 4:4:4, 36 bpp, (msb) 12X, 12Y, 12Z (lsb), the 2-byte value for each X/Y/Z is stored as little-endian, the 4 lower bits are set to 0\n    AV_PIX_FMT_XYZ12BE,      ///< packed XYZ 4:4:4, 36 bpp, (msb) 12X, 12Y, 12Z (lsb), the 2-byte value for each X/Y/Z is stored as big-endian, the 4 lower bits are set to 0\n    AV_PIX_FMT_NV16,         ///< interleaved chroma YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples)\n    AV_PIX_FMT_NV20LE,       ///< interleaved chroma YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian\n    AV_PIX_FMT_NV20BE,       ///< interleaved chroma YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian\n\n    AV_PIX_FMT_RGBA64BE,     ///< packed RGBA 16:16:16:16, 64bpp, 16R, 16G, 16B, 16A, the 2-byte value for each R/G/B/A component is stored as big-endian\n    AV_PIX_FMT_RGBA64LE,     ///< packed RGBA 16:16:16:16, 64bpp, 16R, 16G, 16B, 16A, the 2-byte value for each R/G/B/A component is stored as little-endian\n    AV_PIX_FMT_BGRA64BE,     ///< packed RGBA 16:16:16:16, 64bpp, 16B, 16G, 16R, 16A, the 2-byte value for each R/G/B/A component is stored as big-endian\n    AV_PIX_FMT_BGRA64LE,     ///< packed RGBA 16:16:16:16, 64bpp, 16B, 16G, 16R, 16A, the 2-byte value for each R/G/B/A component is stored as little-endian\n\n    AV_PIX_FMT_YVYU422,   ///< packed YUV 4:2:2, 16bpp, Y0 Cr Y1 Cb\n\n    AV_PIX_FMT_YA16BE,       ///< 16 bits gray, 16 bits alpha (big-endian)\n    AV_PIX_FMT_YA16LE,       ///< 16 bits gray, 16 bits alpha (little-endian)\n\n    AV_PIX_FMT_GBRAP,        ///< planar GBRA 4:4:4:4 32bpp\n    AV_PIX_FMT_GBRAP16BE,    ///< planar GBRA 4:4:4:4 64bpp, big-endian\n    AV_PIX_FMT_GBRAP16LE,    ///< planar GBRA 4:4:4:4 64bpp, little-endian\n    /**\n     *  HW acceleration through QSV, data[3] contains a pointer to the\n     *  mfxFrameSurface1 structure.\n     */\n    AV_PIX_FMT_QSV,\n    /**\n     * HW acceleration through MMAL, data[3] contains a pointer to the\n     * 
MMAL_BUFFER_HEADER_T structure.\n     */\n    AV_PIX_FMT_MMAL,\n\n    AV_PIX_FMT_D3D11VA_VLD,  ///< HW decoding through Direct3D11 via old API, Picture.data[3] contains a ID3D11VideoDecoderOutputView pointer\n\n    /**\n     * HW acceleration through CUDA. data[i] contain CUdeviceptr pointers\n     * exactly as for system memory frames.\n     */\n    AV_PIX_FMT_CUDA,\n\n    AV_PIX_FMT_0RGB,        ///< packed RGB 8:8:8, 32bpp, XRGBXRGB...   X=unused/undefined\n    AV_PIX_FMT_RGB0,        ///< packed RGB 8:8:8, 32bpp, RGBXRGBX...   X=unused/undefined\n    AV_PIX_FMT_0BGR,        ///< packed BGR 8:8:8, 32bpp, XBGRXBGR...   X=unused/undefined\n    AV_PIX_FMT_BGR0,        ///< packed BGR 8:8:8, 32bpp, BGRXBGRX...   X=unused/undefined\n\n    AV_PIX_FMT_YUV420P12BE, ///< planar YUV 4:2:0,18bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian\n    AV_PIX_FMT_YUV420P12LE, ///< planar YUV 4:2:0,18bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian\n    AV_PIX_FMT_YUV420P14BE, ///< planar YUV 4:2:0,21bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian\n    AV_PIX_FMT_YUV420P14LE, ///< planar YUV 4:2:0,21bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian\n    AV_PIX_FMT_YUV422P12BE, ///< planar YUV 4:2:2,24bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian\n    AV_PIX_FMT_YUV422P12LE, ///< planar YUV 4:2:2,24bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian\n    AV_PIX_FMT_YUV422P14BE, ///< planar YUV 4:2:2,28bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian\n    AV_PIX_FMT_YUV422P14LE, ///< planar YUV 4:2:2,28bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian\n    AV_PIX_FMT_YUV444P12BE, ///< planar YUV 4:4:4,36bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian\n    AV_PIX_FMT_YUV444P12LE, ///< planar YUV 4:4:4,36bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian\n    AV_PIX_FMT_YUV444P14BE, ///< planar YUV 4:4:4,42bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian\n    AV_PIX_FMT_YUV444P14LE, ///< planar YUV 4:4:4,42bpp, (1 Cr & 
Cb sample per 1x1 Y samples), little-endian\n    AV_PIX_FMT_GBRP12BE,    ///< planar GBR 4:4:4 36bpp, big-endian\n    AV_PIX_FMT_GBRP12LE,    ///< planar GBR 4:4:4 36bpp, little-endian\n    AV_PIX_FMT_GBRP14BE,    ///< planar GBR 4:4:4 42bpp, big-endian\n    AV_PIX_FMT_GBRP14LE,    ///< planar GBR 4:4:4 42bpp, little-endian\n    AV_PIX_FMT_YUVJ411P,    ///< planar YUV 4:1:1, 12bpp, (1 Cr & Cb sample per 4x1 Y samples) full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV411P and setting color_range\n\n    AV_PIX_FMT_BAYER_BGGR8,    ///< bayer, BGBG..(odd line), GRGR..(even line), 8-bit samples */\n    AV_PIX_FMT_BAYER_RGGB8,    ///< bayer, RGRG..(odd line), GBGB..(even line), 8-bit samples */\n    AV_PIX_FMT_BAYER_GBRG8,    ///< bayer, GBGB..(odd line), RGRG..(even line), 8-bit samples */\n    AV_PIX_FMT_BAYER_GRBG8,    ///< bayer, GRGR..(odd line), BGBG..(even line), 8-bit samples */\n    AV_PIX_FMT_BAYER_BGGR16LE, ///< bayer, BGBG..(odd line), GRGR..(even line), 16-bit samples, little-endian */\n    AV_PIX_FMT_BAYER_BGGR16BE, ///< bayer, BGBG..(odd line), GRGR..(even line), 16-bit samples, big-endian */\n    AV_PIX_FMT_BAYER_RGGB16LE, ///< bayer, RGRG..(odd line), GBGB..(even line), 16-bit samples, little-endian */\n    AV_PIX_FMT_BAYER_RGGB16BE, ///< bayer, RGRG..(odd line), GBGB..(even line), 16-bit samples, big-endian */\n    AV_PIX_FMT_BAYER_GBRG16LE, ///< bayer, GBGB..(odd line), RGRG..(even line), 16-bit samples, little-endian */\n    AV_PIX_FMT_BAYER_GBRG16BE, ///< bayer, GBGB..(odd line), RGRG..(even line), 16-bit samples, big-endian */\n    AV_PIX_FMT_BAYER_GRBG16LE, ///< bayer, GRGR..(odd line), BGBG..(even line), 16-bit samples, little-endian */\n    AV_PIX_FMT_BAYER_GRBG16BE, ///< bayer, GRGR..(odd line), BGBG..(even line), 16-bit samples, big-endian */\n\n    AV_PIX_FMT_XVMC,///< XVideo Motion Acceleration via common packet passing\n\n    AV_PIX_FMT_YUV440P10LE, ///< planar YUV 4:4:0,20bpp, (1 Cr & Cb sample per 1x2 Y samples), little-endian\n    
AV_PIX_FMT_YUV440P10BE, ///< planar YUV 4:4:0,20bpp, (1 Cr & Cb sample per 1x2 Y samples), big-endian\n    AV_PIX_FMT_YUV440P12LE, ///< planar YUV 4:4:0,24bpp, (1 Cr & Cb sample per 1x2 Y samples), little-endian\n    AV_PIX_FMT_YUV440P12BE, ///< planar YUV 4:4:0,24bpp, (1 Cr & Cb sample per 1x2 Y samples), big-endian\n    AV_PIX_FMT_AYUV64LE,    ///< packed AYUV 4:4:4,64bpp (1 Cr & Cb sample per 1x1 Y & A samples), little-endian\n    AV_PIX_FMT_AYUV64BE,    ///< packed AYUV 4:4:4,64bpp (1 Cr & Cb sample per 1x1 Y & A samples), big-endian\n\n    AV_PIX_FMT_VIDEOTOOLBOX, ///< hardware decoding through Videotoolbox\n\n    AV_PIX_FMT_P010LE, ///< like NV12, with 10bpp per component, data in the high bits, zeros in the low bits, little-endian\n    AV_PIX_FMT_P010BE, ///< like NV12, with 10bpp per component, data in the high bits, zeros in the low bits, big-endian\n\n    AV_PIX_FMT_GBRAP12BE,  ///< planar GBR 4:4:4:4 48bpp, big-endian\n    AV_PIX_FMT_GBRAP12LE,  ///< planar GBR 4:4:4:4 48bpp, little-endian\n\n    AV_PIX_FMT_GBRAP10BE,  ///< planar GBR 4:4:4:4 40bpp, big-endian\n    AV_PIX_FMT_GBRAP10LE,  ///< planar GBR 4:4:4:4 40bpp, little-endian\n\n    AV_PIX_FMT_MEDIACODEC, ///< hardware decoding through MediaCodec\n\n    AV_PIX_FMT_GRAY12BE,   ///<        Y        , 12bpp, big-endian\n    AV_PIX_FMT_GRAY12LE,   ///<        Y        , 12bpp, little-endian\n    AV_PIX_FMT_GRAY10BE,   ///<        Y        , 10bpp, big-endian\n    AV_PIX_FMT_GRAY10LE,   ///<        Y        , 10bpp, little-endian\n\n    AV_PIX_FMT_P016LE, ///< like NV12, with 16bpp per component, little-endian\n    AV_PIX_FMT_P016BE, ///< like NV12, with 16bpp per component, big-endian\n\n    /**\n     * Hardware surfaces for Direct3D11.\n     *\n     * This is preferred over the legacy AV_PIX_FMT_D3D11VA_VLD. 
The new D3D11\n     * hwaccel API and filtering support AV_PIX_FMT_D3D11 only.\n     *\n     * data[0] contains a ID3D11Texture2D pointer, and data[1] contains the\n     * texture array index of the frame as intptr_t if the ID3D11Texture2D is\n     * an array texture (or always 0 if it's a normal texture).\n     */\n    AV_PIX_FMT_D3D11,\n\n    AV_PIX_FMT_GRAY9BE,   ///<        Y        , 9bpp, big-endian\n    AV_PIX_FMT_GRAY9LE,   ///<        Y        , 9bpp, little-endian\n\n    AV_PIX_FMT_GBRPF32BE,  ///< IEEE-754 single precision planar GBR 4:4:4,     96bpp, big-endian\n    AV_PIX_FMT_GBRPF32LE,  ///< IEEE-754 single precision planar GBR 4:4:4,     96bpp, little-endian\n    AV_PIX_FMT_GBRAPF32BE, ///< IEEE-754 single precision planar GBRA 4:4:4:4, 128bpp, big-endian\n    AV_PIX_FMT_GBRAPF32LE, ///< IEEE-754 single precision planar GBRA 4:4:4:4, 128bpp, little-endian\n\n    /**\n     * DRM-managed buffers exposed through PRIME buffer sharing.\n     *\n     * data[0] points to an AVDRMFrameDescriptor.\n     */\n    AV_PIX_FMT_DRM_PRIME,\n    /**\n     * Hardware surfaces for OpenCL.\n     *\n     * data[i] contain 2D image objects (typed in C as cl_mem, used\n     * in OpenCL as image2d_t) for each plane of the surface.\n     */\n    AV_PIX_FMT_OPENCL,\n\n    AV_PIX_FMT_NB         ///< number of pixel formats, DO NOT USE THIS if you want to link with shared libav* because the number of formats might differ between versions\n};\n\n#if AV_HAVE_BIGENDIAN\n#   define AV_PIX_FMT_NE(be, le) AV_PIX_FMT_##be\n#else\n#   define AV_PIX_FMT_NE(be, le) AV_PIX_FMT_##le\n#endif\n\n#define AV_PIX_FMT_RGB32   AV_PIX_FMT_NE(ARGB, BGRA)\n#define AV_PIX_FMT_RGB32_1 AV_PIX_FMT_NE(RGBA, ABGR)\n#define AV_PIX_FMT_BGR32   AV_PIX_FMT_NE(ABGR, RGBA)\n#define AV_PIX_FMT_BGR32_1 AV_PIX_FMT_NE(BGRA, ARGB)\n#define AV_PIX_FMT_0RGB32  AV_PIX_FMT_NE(0RGB, BGR0)\n#define AV_PIX_FMT_0BGR32  AV_PIX_FMT_NE(0BGR, RGB0)\n\n#define AV_PIX_FMT_GRAY9  AV_PIX_FMT_NE(GRAY9BE,  GRAY9LE)\n#define 
AV_PIX_FMT_GRAY10 AV_PIX_FMT_NE(GRAY10BE, GRAY10LE)\n#define AV_PIX_FMT_GRAY12 AV_PIX_FMT_NE(GRAY12BE, GRAY12LE)\n#define AV_PIX_FMT_GRAY16 AV_PIX_FMT_NE(GRAY16BE, GRAY16LE)\n#define AV_PIX_FMT_YA16   AV_PIX_FMT_NE(YA16BE,   YA16LE)\n#define AV_PIX_FMT_RGB48  AV_PIX_FMT_NE(RGB48BE,  RGB48LE)\n#define AV_PIX_FMT_RGB565 AV_PIX_FMT_NE(RGB565BE, RGB565LE)\n#define AV_PIX_FMT_RGB555 AV_PIX_FMT_NE(RGB555BE, RGB555LE)\n#define AV_PIX_FMT_RGB444 AV_PIX_FMT_NE(RGB444BE, RGB444LE)\n#define AV_PIX_FMT_RGBA64 AV_PIX_FMT_NE(RGBA64BE, RGBA64LE)\n#define AV_PIX_FMT_BGR48  AV_PIX_FMT_NE(BGR48BE,  BGR48LE)\n#define AV_PIX_FMT_BGR565 AV_PIX_FMT_NE(BGR565BE, BGR565LE)\n#define AV_PIX_FMT_BGR555 AV_PIX_FMT_NE(BGR555BE, BGR555LE)\n#define AV_PIX_FMT_BGR444 AV_PIX_FMT_NE(BGR444BE, BGR444LE)\n#define AV_PIX_FMT_BGRA64 AV_PIX_FMT_NE(BGRA64BE, BGRA64LE)\n\n#define AV_PIX_FMT_YUV420P9  AV_PIX_FMT_NE(YUV420P9BE , YUV420P9LE)\n#define AV_PIX_FMT_YUV422P9  AV_PIX_FMT_NE(YUV422P9BE , YUV422P9LE)\n#define AV_PIX_FMT_YUV444P9  AV_PIX_FMT_NE(YUV444P9BE , YUV444P9LE)\n#define AV_PIX_FMT_YUV420P10 AV_PIX_FMT_NE(YUV420P10BE, YUV420P10LE)\n#define AV_PIX_FMT_YUV422P10 AV_PIX_FMT_NE(YUV422P10BE, YUV422P10LE)\n#define AV_PIX_FMT_YUV440P10 AV_PIX_FMT_NE(YUV440P10BE, YUV440P10LE)\n#define AV_PIX_FMT_YUV444P10 AV_PIX_FMT_NE(YUV444P10BE, YUV444P10LE)\n#define AV_PIX_FMT_YUV420P12 AV_PIX_FMT_NE(YUV420P12BE, YUV420P12LE)\n#define AV_PIX_FMT_YUV422P12 AV_PIX_FMT_NE(YUV422P12BE, YUV422P12LE)\n#define AV_PIX_FMT_YUV440P12 AV_PIX_FMT_NE(YUV440P12BE, YUV440P12LE)\n#define AV_PIX_FMT_YUV444P12 AV_PIX_FMT_NE(YUV444P12BE, YUV444P12LE)\n#define AV_PIX_FMT_YUV420P14 AV_PIX_FMT_NE(YUV420P14BE, YUV420P14LE)\n#define AV_PIX_FMT_YUV422P14 AV_PIX_FMT_NE(YUV422P14BE, YUV422P14LE)\n#define AV_PIX_FMT_YUV444P14 AV_PIX_FMT_NE(YUV444P14BE, YUV444P14LE)\n#define AV_PIX_FMT_YUV420P16 AV_PIX_FMT_NE(YUV420P16BE, YUV420P16LE)\n#define AV_PIX_FMT_YUV422P16 AV_PIX_FMT_NE(YUV422P16BE, YUV422P16LE)\n#define AV_PIX_FMT_YUV444P16 
AV_PIX_FMT_NE(YUV444P16BE, YUV444P16LE)\n\n#define AV_PIX_FMT_GBRP9     AV_PIX_FMT_NE(GBRP9BE ,    GBRP9LE)\n#define AV_PIX_FMT_GBRP10    AV_PIX_FMT_NE(GBRP10BE,    GBRP10LE)\n#define AV_PIX_FMT_GBRP12    AV_PIX_FMT_NE(GBRP12BE,    GBRP12LE)\n#define AV_PIX_FMT_GBRP14    AV_PIX_FMT_NE(GBRP14BE,    GBRP14LE)\n#define AV_PIX_FMT_GBRP16    AV_PIX_FMT_NE(GBRP16BE,    GBRP16LE)\n#define AV_PIX_FMT_GBRAP10   AV_PIX_FMT_NE(GBRAP10BE,   GBRAP10LE)\n#define AV_PIX_FMT_GBRAP12   AV_PIX_FMT_NE(GBRAP12BE,   GBRAP12LE)\n#define AV_PIX_FMT_GBRAP16   AV_PIX_FMT_NE(GBRAP16BE,   GBRAP16LE)\n\n#define AV_PIX_FMT_BAYER_BGGR16 AV_PIX_FMT_NE(BAYER_BGGR16BE,    BAYER_BGGR16LE)\n#define AV_PIX_FMT_BAYER_RGGB16 AV_PIX_FMT_NE(BAYER_RGGB16BE,    BAYER_RGGB16LE)\n#define AV_PIX_FMT_BAYER_GBRG16 AV_PIX_FMT_NE(BAYER_GBRG16BE,    BAYER_GBRG16LE)\n#define AV_PIX_FMT_BAYER_GRBG16 AV_PIX_FMT_NE(BAYER_GRBG16BE,    BAYER_GRBG16LE)\n\n#define AV_PIX_FMT_GBRPF32    AV_PIX_FMT_NE(GBRPF32BE,  GBRPF32LE)\n#define AV_PIX_FMT_GBRAPF32   AV_PIX_FMT_NE(GBRAPF32BE, GBRAPF32LE)\n\n#define AV_PIX_FMT_YUVA420P9  AV_PIX_FMT_NE(YUVA420P9BE , YUVA420P9LE)\n#define AV_PIX_FMT_YUVA422P9  AV_PIX_FMT_NE(YUVA422P9BE , YUVA422P9LE)\n#define AV_PIX_FMT_YUVA444P9  AV_PIX_FMT_NE(YUVA444P9BE , YUVA444P9LE)\n#define AV_PIX_FMT_YUVA420P10 AV_PIX_FMT_NE(YUVA420P10BE, YUVA420P10LE)\n#define AV_PIX_FMT_YUVA422P10 AV_PIX_FMT_NE(YUVA422P10BE, YUVA422P10LE)\n#define AV_PIX_FMT_YUVA444P10 AV_PIX_FMT_NE(YUVA444P10BE, YUVA444P10LE)\n#define AV_PIX_FMT_YUVA420P16 AV_PIX_FMT_NE(YUVA420P16BE, YUVA420P16LE)\n#define AV_PIX_FMT_YUVA422P16 AV_PIX_FMT_NE(YUVA422P16BE, YUVA422P16LE)\n#define AV_PIX_FMT_YUVA444P16 AV_PIX_FMT_NE(YUVA444P16BE, YUVA444P16LE)\n\n#define AV_PIX_FMT_XYZ12      AV_PIX_FMT_NE(XYZ12BE, XYZ12LE)\n#define AV_PIX_FMT_NV20       AV_PIX_FMT_NE(NV20BE,  NV20LE)\n#define AV_PIX_FMT_AYUV64     AV_PIX_FMT_NE(AYUV64BE, AYUV64LE)\n#define AV_PIX_FMT_P010       AV_PIX_FMT_NE(P010BE,  P010LE)\n#define AV_PIX_FMT_P016       
AV_PIX_FMT_NE(P016BE,  P016LE)\n\n/**\n  * Chromaticity coordinates of the source primaries.\n  * These values match the ones defined by ISO/IEC 23001-8_2013 § 7.1.\n  */\nenum AVColorPrimaries {\n    AVCOL_PRI_RESERVED0   = 0,\n    AVCOL_PRI_BT709       = 1,  ///< also ITU-R BT1361 / IEC 61966-2-4 / SMPTE RP177 Annex B\n    AVCOL_PRI_UNSPECIFIED = 2,\n    AVCOL_PRI_RESERVED    = 3,\n    AVCOL_PRI_BT470M      = 4,  ///< also FCC Title 47 Code of Federal Regulations 73.682 (a)(20)\n\n    AVCOL_PRI_BT470BG     = 5,  ///< also ITU-R BT601-6 625 / ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM\n    AVCOL_PRI_SMPTE170M   = 6,  ///< also ITU-R BT601-6 525 / ITU-R BT1358 525 / ITU-R BT1700 NTSC\n    AVCOL_PRI_SMPTE240M   = 7,  ///< functionally identical to above\n    AVCOL_PRI_FILM        = 8,  ///< colour filters using Illuminant C\n    AVCOL_PRI_BT2020      = 9,  ///< ITU-R BT2020\n    AVCOL_PRI_SMPTE428    = 10, ///< SMPTE ST 428-1 (CIE 1931 XYZ)\n    AVCOL_PRI_SMPTEST428_1 = AVCOL_PRI_SMPTE428,\n    AVCOL_PRI_SMPTE431    = 11, ///< SMPTE ST 431-2 (2011) / DCI P3\n    AVCOL_PRI_SMPTE432    = 12, ///< SMPTE ST 432-1 (2010) / P3 D65 / Display P3\n    AVCOL_PRI_JEDEC_P22   = 22, ///< JEDEC P22 phosphors\n    AVCOL_PRI_NB                ///< Not part of ABI\n};\n\n/**\n * Color Transfer Characteristic.\n * These values match the ones defined by ISO/IEC 23001-8_2013 § 7.2.\n */\nenum AVColorTransferCharacteristic {\n    AVCOL_TRC_RESERVED0    = 0,\n    AVCOL_TRC_BT709        = 1,  ///< also ITU-R BT1361\n    AVCOL_TRC_UNSPECIFIED  = 2,\n    AVCOL_TRC_RESERVED     = 3,\n    AVCOL_TRC_GAMMA22      = 4,  ///< also ITU-R BT470M / ITU-R BT1700 625 PAL & SECAM\n    AVCOL_TRC_GAMMA28      = 5,  ///< also ITU-R BT470BG\n    AVCOL_TRC_SMPTE170M    = 6,  ///< also ITU-R BT601-6 525 or 625 / ITU-R BT1358 525 or 625 / ITU-R BT1700 NTSC\n    AVCOL_TRC_SMPTE240M    = 7,\n    AVCOL_TRC_LINEAR       = 8,  ///< \"Linear transfer characteristics\"\n    AVCOL_TRC_LOG          = 9,  ///< 
\"Logarithmic transfer characteristic (100:1 range)\"\n    AVCOL_TRC_LOG_SQRT     = 10, ///< \"Logarithmic transfer characteristic (100 * Sqrt(10) : 1 range)\"\n    AVCOL_TRC_IEC61966_2_4 = 11, ///< IEC 61966-2-4\n    AVCOL_TRC_BT1361_ECG   = 12, ///< ITU-R BT1361 Extended Colour Gamut\n    AVCOL_TRC_IEC61966_2_1 = 13, ///< IEC 61966-2-1 (sRGB or sYCC)\n    AVCOL_TRC_BT2020_10    = 14, ///< ITU-R BT2020 for 10-bit system\n    AVCOL_TRC_BT2020_12    = 15, ///< ITU-R BT2020 for 12-bit system\n    AVCOL_TRC_SMPTE2084    = 16, ///< SMPTE ST 2084 for 10-, 12-, 14- and 16-bit systems\n    AVCOL_TRC_SMPTEST2084  = AVCOL_TRC_SMPTE2084,\n    AVCOL_TRC_SMPTE428     = 17, ///< SMPTE ST 428-1\n    AVCOL_TRC_SMPTEST428_1 = AVCOL_TRC_SMPTE428,\n    AVCOL_TRC_ARIB_STD_B67 = 18, ///< ARIB STD-B67, known as \"Hybrid log-gamma\"\n    AVCOL_TRC_NB                 ///< Not part of ABI\n};\n\n/**\n * YUV colorspace type.\n * These values match the ones defined by ISO/IEC 23001-8_2013 § 7.3.\n */\nenum AVColorSpace {\n    AVCOL_SPC_RGB         = 0,  ///< order of coefficients is actually GBR, also IEC 61966-2-1 (sRGB)\n    AVCOL_SPC_BT709       = 1,  ///< also ITU-R BT1361 / IEC 61966-2-4 xvYCC709 / SMPTE RP177 Annex B\n    AVCOL_SPC_UNSPECIFIED = 2,\n    AVCOL_SPC_RESERVED    = 3,\n    AVCOL_SPC_FCC         = 4,  ///< FCC Title 47 Code of Federal Regulations 73.682 (a)(20)\n    AVCOL_SPC_BT470BG     = 5,  ///< also ITU-R BT601-6 625 / ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM / IEC 61966-2-4 xvYCC601\n    AVCOL_SPC_SMPTE170M   = 6,  ///< also ITU-R BT601-6 525 / ITU-R BT1358 525 / ITU-R BT1700 NTSC\n    AVCOL_SPC_SMPTE240M   = 7,  ///< functionally identical to above\n    AVCOL_SPC_YCGCO       = 8,  ///< Used by Dirac / VC-2 and H.264 FRext, see ITU-T SG16\n    AVCOL_SPC_YCOCG       = AVCOL_SPC_YCGCO,\n    AVCOL_SPC_BT2020_NCL  = 9,  ///< ITU-R BT2020 non-constant luminance system\n    AVCOL_SPC_BT2020_CL   = 10, ///< ITU-R BT2020 constant luminance system\n    
AVCOL_SPC_SMPTE2085   = 11, ///< SMPTE 2085, Y'D'zD'x\n    AVCOL_SPC_CHROMA_DERIVED_NCL = 12, ///< Chromaticity-derived non-constant luminance system\n    AVCOL_SPC_CHROMA_DERIVED_CL = 13, ///< Chromaticity-derived constant luminance system\n    AVCOL_SPC_ICTCP       = 14, ///< ITU-R BT.2100-0, ICtCp\n    AVCOL_SPC_NB                ///< Not part of ABI\n};\n\n/**\n * MPEG vs JPEG YUV range.\n */\nenum AVColorRange {\n    AVCOL_RANGE_UNSPECIFIED = 0,\n    AVCOL_RANGE_MPEG        = 1, ///< the normal 219*2^(n-8) \"MPEG\" YUV ranges\n    AVCOL_RANGE_JPEG        = 2, ///< the normal     2^n-1   \"JPEG\" YUV ranges\n    AVCOL_RANGE_NB               ///< Not part of ABI\n};\n\n/**\n * Location of chroma samples.\n *\n * Illustration showing the location of the first (top left) chroma sample of the\n * image, the left shows only luma, the right\n * shows the location of the chroma sample, the 2 could be imagined to overlay\n * each other but are drawn separately due to limitations of ASCII\n *\n *                1st 2nd       1st 2nd horizontal luma sample positions\n *                 v   v         v   v\n *                 ______        ______\n *1st luma line > |X   X ...    |3 4 X ...     X are luma samples,\n *                |             |1 2           1-6 are possible chroma positions\n *2nd luma line > |X   X ...    |5 6 X ...     0 is undefined/unknown position\n */\nenum AVChromaLocation {\n    AVCHROMA_LOC_UNSPECIFIED = 0,\n    AVCHROMA_LOC_LEFT        = 1, ///< MPEG-2/4 4:2:0, H.264 default for 4:2:0\n    AVCHROMA_LOC_CENTER      = 2, ///< MPEG-1 4:2:0, JPEG 4:2:0, H.263 4:2:0\n    AVCHROMA_LOC_TOPLEFT     = 3, ///< ITU-R 601, SMPTE 274M 296M S314M(DV 4:1:1), mpeg2 4:2:2\n    AVCHROMA_LOC_TOP         = 4,\n    AVCHROMA_LOC_BOTTOMLEFT  = 5,\n    AVCHROMA_LOC_BOTTOM      = 6,\n    AVCHROMA_LOC_NB               ///< Not part of ABI\n};\n\n#endif /* AVUTIL_PIXFMT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/random_seed.h",
    "content": "/*\n * Copyright (c) 2009 Baptiste Coudurier <baptiste.coudurier@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_RANDOM_SEED_H\n#define AVUTIL_RANDOM_SEED_H\n\n#include <stdint.h>\n/**\n * @addtogroup lavu_crypto\n * @{\n */\n\n/**\n * Get a seed to use in conjunction with random functions.\n * This function tries to provide a good seed on a best-effort basis.\n * It is possible to call this function multiple times if more bits are needed.\n * It can be quite slow, which is why it should only be used as a seed for a faster\n * PRNG. The quality of the seed depends on the platform.\n */\nuint32_t av_get_random_seed(void);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_RANDOM_SEED_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/rational.h",
    "content": "/*\n * rational numbers\n * Copyright (c) 2003 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_math_rational\n * Utilities for rational number calculation.\n * @author Michael Niedermayer <michaelni@gmx.at>\n */\n\n#ifndef AVUTIL_RATIONAL_H\n#define AVUTIL_RATIONAL_H\n\n#include <stdint.h>\n#include <limits.h>\n#include \"attributes.h\"\n\n/**\n * @defgroup lavu_math_rational AVRational\n * @ingroup lavu_math\n * Rational number calculation.\n *\n * While rational numbers can be expressed as floating-point numbers, the\n * conversion process is a lossy one, as are floating-point operations. On the\n * other hand, the nature of FFmpeg demands highly accurate calculation of\n * timestamps. 
This set of rational number utilities serves as a generic\n * interface for manipulating rational numbers as pairs of numerators and\n * denominators.\n *\n * Many of the functions that operate on AVRational's have the suffix `_q`, in\n * reference to the mathematical symbol \"ℚ\" (Q) which denotes the set of all\n * rational numbers.\n *\n * @{\n */\n\n/**\n * Rational number (pair of numerator and denominator).\n */\ntypedef struct AVRational{\n    int num; ///< Numerator\n    int den; ///< Denominator\n} AVRational;\n\n/**\n * Create an AVRational.\n *\n * Useful for compilers that do not support compound literals.\n *\n * @note The return value is not reduced.\n * @see av_reduce()\n */\nstatic inline AVRational av_make_q(int num, int den)\n{\n    AVRational r = { num, den };\n    return r;\n}\n\n/**\n * Compare two rationals.\n *\n * @param a First rational\n * @param b Second rational\n *\n * @return One of the following values:\n *         - 0 if `a == b`\n *         - 1 if `a > b`\n *         - -1 if `a < b`\n *         - `INT_MIN` if one of the values is of the form `0 / 0`\n */\nstatic inline int av_cmp_q(AVRational a, AVRational b){\n    const int64_t tmp= a.num * (int64_t)b.den - b.num * (int64_t)a.den;\n\n    if(tmp) return (int)((tmp ^ a.den ^ b.den)>>63)|1;\n    else if(b.den && a.den) return 0;\n    else if(a.num && b.num) return (a.num>>31) - (b.num>>31);\n    else                    return INT_MIN;\n}\n\n/**\n * Convert an AVRational to a `double`.\n * @param a AVRational to convert\n * @return `a` in floating-point form\n * @see av_d2q()\n */\nstatic inline double av_q2d(AVRational a){\n    return a.num / (double) a.den;\n}\n\n/**\n * Reduce a fraction.\n *\n * This is useful for framerate calculations.\n *\n * @param[out] dst_num Destination numerator\n * @param[out] dst_den Destination denominator\n * @param[in]      num Source numerator\n * @param[in]      den Source denominator\n * @param[in]      max Maximum allowed values for `dst_num` & 
`dst_den`\n * @return 1 if the operation is exact, 0 otherwise\n */\nint av_reduce(int *dst_num, int *dst_den, int64_t num, int64_t den, int64_t max);\n\n/**\n * Multiply two rationals.\n * @param b First rational\n * @param c Second rational\n * @return b*c\n */\nAVRational av_mul_q(AVRational b, AVRational c) av_const;\n\n/**\n * Divide one rational by another.\n * @param b First rational\n * @param c Second rational\n * @return b/c\n */\nAVRational av_div_q(AVRational b, AVRational c) av_const;\n\n/**\n * Add two rationals.\n * @param b First rational\n * @param c Second rational\n * @return b+c\n */\nAVRational av_add_q(AVRational b, AVRational c) av_const;\n\n/**\n * Subtract one rational from another.\n * @param b First rational\n * @param c Second rational\n * @return b-c\n */\nAVRational av_sub_q(AVRational b, AVRational c) av_const;\n\n/**\n * Invert a rational.\n * @param q value\n * @return 1 / q\n */\nstatic av_always_inline AVRational av_inv_q(AVRational q)\n{\n    AVRational r = { q.den, q.num };\n    return r;\n}\n\n/**\n * Convert a double precision floating point number to a rational.\n *\n * In case of infinity, the returned value is expressed as `{1, 0}` or\n * `{-1, 0}` depending on the sign.\n *\n * @param d   `double` to convert\n * @param max Maximum allowed numerator and denominator\n * @return `d` in AVRational form\n * @see av_q2d()\n */\nAVRational av_d2q(double d, int max) av_const;\n\n/**\n * Find which of the two rationals is closer to another rational.\n *\n * @param q     Rational to be compared against\n * @param q1,q2 Rationals to be tested\n * @return One of the following values:\n *         - 1 if `q1` is nearer to `q` than `q2`\n *         - -1 if `q2` is nearer to `q` than `q1`\n *         - 0 if they have the same distance\n */\nint av_nearer_q(AVRational q, AVRational q1, AVRational q2);\n\n/**\n * Find the value in a list of rationals nearest a given reference rational.\n *\n * @param q      Reference rational\n * @param 
q_list Array of rationals terminated by `{0, 0}`\n * @return Index of the nearest value found in the array\n */\nint av_find_nearest_q_idx(AVRational q, const AVRational* q_list);\n\n/**\n * Convert an AVRational to an IEEE 32-bit `float` expressed in fixed-point\n * format.\n *\n * @param q Rational to be converted\n * @return Equivalent floating-point value, expressed as an unsigned 32-bit\n *         integer.\n * @note The returned value is platform-independent.\n */\nuint32_t av_q2intfloat(AVRational q);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_RATIONAL_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/rc4.h",
    "content": "/*\n * RC4 encryption/decryption/pseudo-random number generator\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_RC4_H\n#define AVUTIL_RC4_H\n\n#include <stdint.h>\n\n/**\n * @defgroup lavu_rc4 RC4\n * @ingroup lavu_crypto\n * @{\n */\n\ntypedef struct AVRC4 {\n    uint8_t state[256];\n    int x, y;\n} AVRC4;\n\n/**\n * Allocate an AVRC4 context.\n */\nAVRC4 *av_rc4_alloc(void);\n\n/**\n * @brief Initializes an AVRC4 context.\n *\n * @param key_bits must be a multiple of 8\n * @param decrypt 0 for encryption, 1 for decryption, currently has no effect\n * @return zero on success, negative value otherwise\n */\nint av_rc4_init(struct AVRC4 *d, const uint8_t *key, int key_bits, int decrypt);\n\n/**\n * @brief Encrypts / decrypts using the RC4 algorithm.\n *\n * @param count number of bytes\n * @param dst destination array, can be equal to src\n * @param src source array, can be equal to dst, may be NULL\n * @param iv not (yet) used for RC4, should be NULL\n * @param decrypt 0 for encryption, 1 for decryption, not (yet) used\n */\nvoid av_rc4_crypt(struct AVRC4 *d, uint8_t *dst, const uint8_t *src, int count, uint8_t *iv, int decrypt);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_RC4_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/replaygain.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_REPLAYGAIN_H\n#define AVUTIL_REPLAYGAIN_H\n\n#include <stdint.h>\n\n/**\n * ReplayGain information (see\n * http://wiki.hydrogenaudio.org/index.php?title=ReplayGain_1.0_specification).\n * The size of this struct is a part of the public ABI.\n */\ntypedef struct AVReplayGain {\n    /**\n     * Track replay gain in microbels (divide by 100000 to get the value in dB).\n     * Should be set to INT32_MIN when unknown.\n     */\n    int32_t track_gain;\n    /**\n     * Peak track amplitude, with 100000 representing full scale (but values\n     * may overflow). 0 when unknown.\n     */\n    uint32_t track_peak;\n    /**\n     * Same as track_gain, but for the whole album.\n     */\n    int32_t album_gain;\n    /**\n     * Same as track_peak, but for the whole album.\n     */\n    uint32_t album_peak;\n} AVReplayGain;\n\n#endif /* AVUTIL_REPLAYGAIN_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/ripemd.h",
    "content": "/*\n * Copyright (C) 2007 Michael Niedermayer <michaelni@gmx.at>\n * Copyright (C) 2013 James Almer <jamrial@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_ripemd\n * Public header for RIPEMD hash function implementation.\n */\n\n#ifndef AVUTIL_RIPEMD_H\n#define AVUTIL_RIPEMD_H\n\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @defgroup lavu_ripemd RIPEMD\n * @ingroup lavu_hash\n * RIPEMD hash function implementation.\n *\n * @{\n */\n\nextern const int av_ripemd_size;\n\nstruct AVRIPEMD;\n\n/**\n * Allocate an AVRIPEMD context.\n */\nstruct AVRIPEMD *av_ripemd_alloc(void);\n\n/**\n * Initialize RIPEMD hashing.\n *\n * @param context pointer to the function context (of size av_ripemd_size)\n * @param bits    number of bits in digest (128, 160, 256 or 320 bits)\n * @return        zero if initialization succeeded, -1 otherwise\n */\nint av_ripemd_init(struct AVRIPEMD* context, int bits);\n\n/**\n * Update hash value.\n *\n * @param context hash function context\n * @param data    input data to update hash with\n * @param len     input data length\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_ripemd_update(struct AVRIPEMD* context, const uint8_t* data, 
unsigned int len);\n#else\nvoid av_ripemd_update(struct AVRIPEMD* context, const uint8_t* data, size_t len);\n#endif\n\n/**\n * Finish hashing and output digest value.\n *\n * @param context hash function context\n * @param digest  buffer where output digest value is stored\n */\nvoid av_ripemd_final(struct AVRIPEMD* context, uint8_t *digest);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_RIPEMD_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/samplefmt.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_SAMPLEFMT_H\n#define AVUTIL_SAMPLEFMT_H\n\n#include <stdint.h>\n\n#include \"avutil.h\"\n#include \"attributes.h\"\n\n/**\n * @addtogroup lavu_audio\n * @{\n *\n * @defgroup lavu_sampfmts Audio sample formats\n *\n * Audio sample format enumeration and related convenience functions.\n * @{\n */\n\n/**\n * Audio sample formats\n *\n * - The data described by the sample format is always in native-endian order.\n *   Sample values can be expressed by native C types, hence the lack of a signed\n *   24-bit sample format even though it is a common raw audio data format.\n *\n * - The floating-point formats are based on full volume being in the range\n *   [-1.0, 1.0]. Any values outside this range are beyond full volume level.\n *\n * - The data layout as used in av_samples_fill_arrays() and elsewhere in FFmpeg\n *   (such as AVFrame in libavcodec) is as follows:\n *\n * @par\n * For planar sample formats, each audio channel is in a separate data plane,\n * and linesize is the buffer size, in bytes, for a single plane. All data\n * planes must be the same size. For packed sample formats, only the first data\n * plane is used, and samples for each channel are interleaved. 
In this case,\n * linesize is the buffer size, in bytes, for the 1 plane.\n *\n */\nenum AVSampleFormat {\n    AV_SAMPLE_FMT_NONE = -1,\n    AV_SAMPLE_FMT_U8,          ///< unsigned 8 bits\n    AV_SAMPLE_FMT_S16,         ///< signed 16 bits\n    AV_SAMPLE_FMT_S32,         ///< signed 32 bits\n    AV_SAMPLE_FMT_FLT,         ///< float\n    AV_SAMPLE_FMT_DBL,         ///< double\n\n    AV_SAMPLE_FMT_U8P,         ///< unsigned 8 bits, planar\n    AV_SAMPLE_FMT_S16P,        ///< signed 16 bits, planar\n    AV_SAMPLE_FMT_S32P,        ///< signed 32 bits, planar\n    AV_SAMPLE_FMT_FLTP,        ///< float, planar\n    AV_SAMPLE_FMT_DBLP,        ///< double, planar\n    AV_SAMPLE_FMT_S64,         ///< signed 64 bits\n    AV_SAMPLE_FMT_S64P,        ///< signed 64 bits, planar\n\n    AV_SAMPLE_FMT_NB           ///< Number of sample formats. DO NOT USE if linking dynamically\n};\n\n/**\n * Return the name of sample_fmt, or NULL if sample_fmt is not\n * recognized.\n */\nconst char *av_get_sample_fmt_name(enum AVSampleFormat sample_fmt);\n\n/**\n * Return a sample format corresponding to name, or AV_SAMPLE_FMT_NONE\n * on error.\n */\nenum AVSampleFormat av_get_sample_fmt(const char *name);\n\n/**\n * Return the planar<->packed alternative form of the given sample format, or\n * AV_SAMPLE_FMT_NONE on error. 
If the passed sample_fmt is already in the\n * requested planar/packed format, the format returned is the same as the\n * input.\n */\nenum AVSampleFormat av_get_alt_sample_fmt(enum AVSampleFormat sample_fmt, int planar);\n\n/**\n * Get the packed alternative form of the given sample format.\n *\n * If the passed sample_fmt is already in packed format, the format returned is\n * the same as the input.\n *\n * @return  the packed alternative form of the given sample format or\n            AV_SAMPLE_FMT_NONE on error.\n */\nenum AVSampleFormat av_get_packed_sample_fmt(enum AVSampleFormat sample_fmt);\n\n/**\n * Get the planar alternative form of the given sample format.\n *\n * If the passed sample_fmt is already in planar format, the format returned is\n * the same as the input.\n *\n * @return  the planar alternative form of the given sample format or\n            AV_SAMPLE_FMT_NONE on error.\n */\nenum AVSampleFormat av_get_planar_sample_fmt(enum AVSampleFormat sample_fmt);\n\n/**\n * Generate a string corresponding to the sample format with\n * sample_fmt, or a header if sample_fmt is negative.\n *\n * @param buf the buffer where to write the string\n * @param buf_size the size of buf\n * @param sample_fmt the number of the sample format to print the\n * corresponding info string, or a negative value to print the\n * corresponding header.\n * @return the pointer to the filled buffer or NULL if sample_fmt is\n * unknown or in case of other errors\n */\nchar *av_get_sample_fmt_string(char *buf, int buf_size, enum AVSampleFormat sample_fmt);\n\n/**\n * Return number of bytes per sample.\n *\n * @param sample_fmt the sample format\n * @return number of bytes per sample or zero if unknown for the given\n * sample format\n */\nint av_get_bytes_per_sample(enum AVSampleFormat sample_fmt);\n\n/**\n * Check if the sample format is planar.\n *\n * @param sample_fmt the sample format to inspect\n * @return 1 if the sample format is planar, 0 if it is interleaved\n */\nint 
av_sample_fmt_is_planar(enum AVSampleFormat sample_fmt);\n\n/**\n * Get the required buffer size for the given audio parameters.\n *\n * @param[out] linesize calculated linesize, may be NULL\n * @param nb_channels   the number of channels\n * @param nb_samples    the number of samples in a single channel\n * @param sample_fmt    the sample format\n * @param align         buffer size alignment (0 = default, 1 = no alignment)\n * @return              required buffer size, or negative error code on failure\n */\nint av_samples_get_buffer_size(int *linesize, int nb_channels, int nb_samples,\n                               enum AVSampleFormat sample_fmt, int align);\n\n/**\n * @}\n *\n * @defgroup lavu_sampmanip Samples manipulation\n *\n * Functions that manipulate audio samples\n * @{\n */\n\n/**\n * Fill plane data pointers and linesize for samples with sample\n * format sample_fmt.\n *\n * The audio_data array is filled with the pointers to the samples data planes:\n * for planar, set the start point of each channel's data within the buffer,\n * for packed, set the start point of the entire buffer only.\n *\n * The value pointed to by linesize is set to the aligned size of each\n * channel's data buffer for planar layout, or to the aligned size of the\n * buffer for all channels for packed layout.\n *\n * The buffer in buf must be big enough to contain all the samples\n * (use av_samples_get_buffer_size() to compute its minimum size),\n * otherwise the audio_data pointers will point to invalid data.\n *\n * @see enum AVSampleFormat\n * The documentation for AVSampleFormat describes the data layout.\n *\n * @param[out] audio_data  array to be filled with the pointer for each channel\n * @param[out] linesize    calculated linesize, may be NULL\n * @param buf              the pointer to a buffer containing the samples\n * @param nb_channels      the number of channels\n * @param nb_samples       the number of samples in a single channel\n * @param sample_fmt       the 
sample format\n * @param align            buffer size alignment (0 = default, 1 = no alignment)\n * @return                 >=0 on success or a negative error code on failure\n * @todo return minimum size in bytes required for the buffer in case\n * of success at the next bump\n */\nint av_samples_fill_arrays(uint8_t **audio_data, int *linesize,\n                           const uint8_t *buf,\n                           int nb_channels, int nb_samples,\n                           enum AVSampleFormat sample_fmt, int align);\n\n/**\n * Allocate a samples buffer for nb_samples samples, and fill data pointers and\n * linesize accordingly.\n * The allocated samples buffer can be freed by using av_freep(&audio_data[0])\n * Allocated data will be initialized to silence.\n *\n * @see enum AVSampleFormat\n * The documentation for AVSampleFormat describes the data layout.\n *\n * @param[out] audio_data  array to be filled with the pointer for each channel\n * @param[out] linesize    aligned size for audio buffer(s), may be NULL\n * @param nb_channels      number of audio channels\n * @param nb_samples       number of samples per channel\n * @param align            buffer size alignment (0 = default, 1 = no alignment)\n * @return                 >=0 on success or a negative error code on failure\n * @todo return the size of the allocated buffer in case of success at the next bump\n * @see av_samples_fill_arrays()\n * @see av_samples_alloc_array_and_samples()\n */\nint av_samples_alloc(uint8_t **audio_data, int *linesize, int nb_channels,\n                     int nb_samples, enum AVSampleFormat sample_fmt, int align);\n\n/**\n * Allocate a data pointers array, samples buffer for nb_samples\n * samples, and fill data pointers and linesize accordingly.\n *\n * This is the same as av_samples_alloc(), but also allocates the data\n * pointers array.\n *\n * @see av_samples_alloc()\n */\nint av_samples_alloc_array_and_samples(uint8_t ***audio_data, int *linesize, int nb_channels,\n 
                                      int nb_samples, enum AVSampleFormat sample_fmt, int align);\n\n/**\n * Copy samples from src to dst.\n *\n * @param dst destination array of pointers to data planes\n * @param src source array of pointers to data planes\n * @param dst_offset offset in samples at which the data will be written to dst\n * @param src_offset offset in samples at which the data will be read from src\n * @param nb_samples number of samples to be copied\n * @param nb_channels number of audio channels\n * @param sample_fmt audio sample format\n */\nint av_samples_copy(uint8_t **dst, uint8_t * const *src, int dst_offset,\n                    int src_offset, int nb_samples, int nb_channels,\n                    enum AVSampleFormat sample_fmt);\n\n/**\n * Fill an audio buffer with silence.\n *\n * @param audio_data  array of pointers to data planes\n * @param offset      offset in samples at which to start filling\n * @param nb_samples  number of samples to fill\n * @param nb_channels number of audio channels\n * @param sample_fmt  audio sample format\n */\nint av_samples_set_silence(uint8_t **audio_data, int offset, int nb_samples,\n                           int nb_channels, enum AVSampleFormat sample_fmt);\n\n/**\n * @}\n * @}\n */\n#endif /* AVUTIL_SAMPLEFMT_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/sha.h",
    "content": "/*\n * Copyright (C) 2007 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_sha\n * Public header for SHA-1 & SHA-256 hash function implementations.\n */\n\n#ifndef AVUTIL_SHA_H\n#define AVUTIL_SHA_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @defgroup lavu_sha SHA\n * @ingroup lavu_hash\n * SHA-1 and SHA-256 (Secure Hash Algorithm) hash function implementations.\n *\n * This module supports the following SHA hash functions:\n *\n * - SHA-1: 160 bits\n * - SHA-224: 224 bits, as a variant of SHA-2\n * - SHA-256: 256 bits, as a variant of SHA-2\n *\n * @see For SHA-384, SHA-512, and variants thereof, see @ref lavu_sha512.\n *\n * @{\n */\n\nextern const int av_sha_size;\n\nstruct AVSHA;\n\n/**\n * Allocate an AVSHA context.\n */\nstruct AVSHA *av_sha_alloc(void);\n\n/**\n * Initialize SHA-1 or SHA-2 hashing.\n *\n * @param context pointer to the function context (of size av_sha_size)\n * @param bits    number of bits in digest (SHA-1 - 160 bits, SHA-2 224 or 256 bits)\n * @return        zero if initialization succeeded, -1 otherwise\n */\nint av_sha_init(struct AVSHA* context, int bits);\n\n/**\n * 
Update hash value.\n *\n * @param ctx     hash function context\n * @param data    input data to update hash with\n * @param len     input data length\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_sha_update(struct AVSHA *ctx, const uint8_t *data, unsigned int len);\n#else\nvoid av_sha_update(struct AVSHA *ctx, const uint8_t *data, size_t len);\n#endif\n\n/**\n * Finish hashing and output digest value.\n *\n * @param context hash function context\n * @param digest  buffer where output digest value is stored\n */\nvoid av_sha_final(struct AVSHA* context, uint8_t *digest);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_SHA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/sha512.h",
    "content": "/*\n * Copyright (C) 2007 Michael Niedermayer <michaelni@gmx.at>\n * Copyright (C) 2013 James Almer <jamrial@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu_sha512\n * Public header for SHA-512 implementation.\n */\n\n#ifndef AVUTIL_SHA512_H\n#define AVUTIL_SHA512_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @defgroup lavu_sha512 SHA-512\n * @ingroup lavu_hash\n * SHA-512 (Secure Hash Algorithm) hash function implementations.\n *\n * This module supports the following SHA-2 hash functions:\n *\n * - SHA-512/224: 224 bits\n * - SHA-512/256: 256 bits\n * - SHA-384: 384 bits\n * - SHA-512: 512 bits\n *\n * @see For SHA-1, SHA-256, and variants thereof, see @ref lavu_sha.\n *\n * @{\n */\n\nextern const int av_sha512_size;\n\nstruct AVSHA512;\n\n/**\n * Allocate an AVSHA512 context.\n */\nstruct AVSHA512 *av_sha512_alloc(void);\n\n/**\n * Initialize SHA-2 512 hashing.\n *\n * @param context pointer to the function context (of size av_sha512_size)\n * @param bits    number of bits in digest (224, 256, 384 or 512 bits)\n * @return        zero if initialization succeeded, -1 otherwise\n */\nint av_sha512_init(struct AVSHA512* 
context, int bits);\n\n/**\n * Update hash value.\n *\n * @param context hash function context\n * @param data    input data to update hash with\n * @param len     input data length\n */\n#if FF_API_CRYPTO_SIZE_T\nvoid av_sha512_update(struct AVSHA512* context, const uint8_t* data, unsigned int len);\n#else\nvoid av_sha512_update(struct AVSHA512* context, const uint8_t* data, size_t len);\n#endif\n\n/**\n * Finish hashing and output digest value.\n *\n * @param context hash function context\n * @param digest  buffer where output digest value is stored\n */\nvoid av_sha512_final(struct AVSHA512* context, uint8_t *digest);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_SHA512_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/spherical.h",
    "content": "/*\n * Copyright (c) 2016 Vittorio Giovara <vittorio.giovara@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Spherical video\n */\n\n#ifndef AVUTIL_SPHERICAL_H\n#define AVUTIL_SPHERICAL_H\n\n#include <stddef.h>\n#include <stdint.h>\n\n/**\n * @addtogroup lavu_video\n * @{\n *\n * @defgroup lavu_video_spherical Spherical video mapping\n * @{\n */\n\n/**\n * @addtogroup lavu_video_spherical\n * A spherical video file contains surfaces that need to be mapped onto a\n * sphere. Depending on how the frame was converted, a different distortion\n * transformation or surface recomposition function needs to be applied before\n * the video should be mapped and displayed.\n */\n\n/**\n * Projection of the video surface(s) on a sphere.\n */\nenum AVSphericalProjection {\n    /**\n     * Video represents a sphere mapped on a flat surface using\n     * equirectangular projection.\n     */\n    AV_SPHERICAL_EQUIRECTANGULAR,\n\n    /**\n     * Video frame is split into 6 faces of a cube, and arranged on a\n     * 3x2 layout. Faces are oriented upwards for the front, left, right,\n     * and back faces. 
The up face is oriented so the top of the face is\n     * forwards and the down face is oriented so the top of the face is\n     * to the back.\n     */\n    AV_SPHERICAL_CUBEMAP,\n\n    /**\n     * Video represents a portion of a sphere mapped on a flat surface\n     * using equirectangular projection. The @ref bounding fields indicate\n     * the position of the current video in a larger surface.\n     */\n    AV_SPHERICAL_EQUIRECTANGULAR_TILE,\n};\n\n/**\n * This structure describes how to handle spherical videos, outlining\n * information about projection, initial layout, and any other view modifier.\n *\n * @note The struct must be allocated with av_spherical_alloc() and\n *       its size is not a part of the public ABI.\n */\ntypedef struct AVSphericalMapping {\n    /**\n     * Projection type.\n     */\n    enum AVSphericalProjection projection;\n\n    /**\n     * @name Initial orientation\n     * @{\n     * These fields describe additional rotations applied to the sphere after\n     * the video frame is mapped onto it. The sphere is rotated around the\n     * viewer, who remains stationary. The order of transformation is always\n     * yaw, followed by pitch, and finally by roll.\n     *\n     * The coordinate system matches the one defined in OpenGL, where the\n     * forward vector (z) is coming out of screen, and it is equivalent to\n     * a rotation matrix of R = r_y(yaw) * r_x(pitch) * r_z(roll).\n     *\n     * A positive yaw rotates the portion of the sphere in front of the viewer\n     * toward their right. A positive pitch rotates the portion of the sphere\n     * in front of the viewer upwards. 
A positive roll tilts the portion of\n     * the sphere in front of the viewer to the viewer's right.\n     *\n     * These values are exported as 16.16 fixed point.\n     *\n     * See this equirectangular projection as example:\n     *\n     * @code{.unparsed}\n     *                   Yaw\n     *     -180           0           180\n     *   90 +-------------+-------------+  180\n     *      |             |             |                  up\n     * P    |             |             |                 y|    forward\n     * i    |             ^             |                  |   /z\n     * t  0 +-------------X-------------+    0 Roll        |  /\n     * c    |             |             |                  | /\n     * h    |             |             |                 0|/_____right\n     *      |             |             |                        x\n     *  -90 +-------------+-------------+ -180\n     *\n     * X - the default camera center\n     * ^ - the default up vector\n     * @endcode\n     */\n    int32_t yaw;   ///< Rotation around the up vector [-180, 180].\n    int32_t pitch; ///< Rotation around the right vector [-90, 90].\n    int32_t roll;  ///< Rotation around the forward vector [-180, 180].\n    /**\n     * @}\n     */\n\n    /**\n     * @name Bounding rectangle\n     * @anchor bounding\n     * @{\n     * These fields indicate the location of the current tile, and where\n     * it should be mapped relative to the original surface. 
They are\n     * exported as 0.32 fixed point, and can be converted to classic\n     * pixel values with av_spherical_bounds().\n     *\n     * @code{.unparsed}\n     *      +----------------+----------+\n     *      |                |bound_top |\n     *      |            +--------+     |\n     *      | bound_left |tile    |     |\n     *      +<---------->|        |<--->+bound_right\n     *      |            +--------+     |\n     *      |                |          |\n     *      |    bound_bottom|          |\n     *      +----------------+----------+\n     * @endcode\n     *\n     * If needed, the original video surface dimensions can be derived\n     * by adding the current stream or frame size to the related bounds,\n     * like in the following example:\n     *\n     * @code{c}\n     *     original_width  = tile->width  + bound_left + bound_right;\n     *     original_height = tile->height + bound_top  + bound_bottom;\n     * @endcode\n     *\n     * @note These values are valid only for the tiled equirectangular\n     *       projection type (@ref AV_SPHERICAL_EQUIRECTANGULAR_TILE),\n     *       and should be ignored in all other cases.\n     */\n    uint32_t bound_left;   ///< Distance from the left edge\n    uint32_t bound_top;    ///< Distance from the top edge\n    uint32_t bound_right;  ///< Distance from the right edge\n    uint32_t bound_bottom; ///< Distance from the bottom edge\n    /**\n     * @}\n     */\n\n    /**\n     * Number of pixels to pad from the edge of each cube face.\n     *\n     * @note This value is valid only for the cubemap projection type\n     *       (@ref AV_SPHERICAL_CUBEMAP), and should be ignored in all other\n     *       cases.\n     */\n    uint32_t padding;\n} AVSphericalMapping;\n\n/**\n * Allocate an AVSphericalMapping structure and initialize its fields to default\n * values.\n *\n * @return the newly allocated struct or NULL on failure\n */\nAVSphericalMapping *av_spherical_alloc(size_t *size);\n\n/**\n * Convert 
the @ref bounding fields of an AVSphericalMapping\n * from 0.32 fixed point to pixels.\n *\n * @param map    The AVSphericalMapping map to read bound values from.\n * @param width  Width of the current frame or stream.\n * @param height Height of the current frame or stream.\n * @param left   Pixels from the left edge.\n * @param top    Pixels from the top edge.\n * @param right  Pixels from the right edge.\n * @param bottom Pixels from the bottom edge.\n */\nvoid av_spherical_tile_bounds(const AVSphericalMapping *map,\n                              size_t width, size_t height,\n                              size_t *left, size_t *top,\n                              size_t *right, size_t *bottom);\n\n/**\n * Provide a human-readable name of a given AVSphericalProjection.\n *\n * @param projection The input AVSphericalProjection.\n *\n * @return The name of the AVSphericalProjection, or \"unknown\".\n */\nconst char *av_spherical_projection_name(enum AVSphericalProjection projection);\n\n/**\n * Get the AVSphericalProjection from a human-readable name.\n *\n * @param name The input string.\n *\n * @return The AVSphericalProjection value, or -1 if not found.\n */\nint av_spherical_from_name(const char *name);\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_SPHERICAL_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/stereo3d.h",
    "content": "/*\n * Copyright (c) 2013 Vittorio Giovara <vittorio.giovara@gmail.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Stereoscopic video\n */\n\n#ifndef AVUTIL_STEREO3D_H\n#define AVUTIL_STEREO3D_H\n\n#include <stdint.h>\n\n#include \"frame.h\"\n\n/**\n * @addtogroup lavu_video\n * @{\n *\n * @defgroup lavu_video_stereo3d Stereo3D types and functions\n * @{\n */\n\n/**\n * @addtogroup lavu_video_stereo3d\n * A stereoscopic video file consists of multiple views embedded in a single\n * frame, usually describing two views of a scene. 
This file describes all\n * possible codec-independent view arrangements.\n * */\n\n/**\n * List of possible 3D Types\n */\nenum AVStereo3DType {\n    /**\n     * Video is not stereoscopic (and metadata has to be there).\n     */\n    AV_STEREO3D_2D,\n\n    /**\n     * Views are next to each other.\n     *\n     * @code{.unparsed}\n     *    LLLLRRRR\n     *    LLLLRRRR\n     *    LLLLRRRR\n     *    ...\n     * @endcode\n     */\n    AV_STEREO3D_SIDEBYSIDE,\n\n    /**\n     * Views are on top of each other.\n     *\n     * @code{.unparsed}\n     *    LLLLLLLL\n     *    LLLLLLLL\n     *    RRRRRRRR\n     *    RRRRRRRR\n     * @endcode\n     */\n    AV_STEREO3D_TOPBOTTOM,\n\n    /**\n     * Views are alternated temporally.\n     *\n     * @code{.unparsed}\n     *     frame0   frame1   frame2   ...\n     *    LLLLLLLL RRRRRRRR LLLLLLLL\n     *    LLLLLLLL RRRRRRRR LLLLLLLL\n     *    LLLLLLLL RRRRRRRR LLLLLLLL\n     *    ...      ...      ...\n     * @endcode\n     */\n    AV_STEREO3D_FRAMESEQUENCE,\n\n    /**\n     * Views are packed in a checkerboard-like structure per pixel.\n     *\n     * @code{.unparsed}\n     *    LRLRLRLR\n     *    RLRLRLRL\n     *    LRLRLRLR\n     *    ...\n     * @endcode\n     */\n    AV_STEREO3D_CHECKERBOARD,\n\n    /**\n     * Views are next to each other, but when upscaling\n     * apply a checkerboard pattern.\n     *\n     * @code{.unparsed}\n     *     LLLLRRRR          L L L L    R R R R\n     *     LLLLRRRR    =>     L L L L  R R R R\n     *     LLLLRRRR          L L L L    R R R R\n     *     LLLLRRRR           L L L L  R R R R\n     * @endcode\n     */\n    AV_STEREO3D_SIDEBYSIDE_QUINCUNX,\n\n    /**\n     * Views are packed per line, as if interlaced.\n     *\n     * @code{.unparsed}\n     *    LLLLLLLL\n     *    RRRRRRRR\n     *    LLLLLLLL\n     *    ...\n     * @endcode\n     */\n    AV_STEREO3D_LINES,\n\n    /**\n     * Views are packed per column.\n     *\n     * @code{.unparsed}\n     *    LRLRLRLR\n     *    
LRLRLRLR\n     *    LRLRLRLR\n     *    ...\n     * @endcode\n     */\n    AV_STEREO3D_COLUMNS,\n};\n\n/**\n * List of possible view types.\n */\nenum AVStereo3DView {\n    /**\n     * Frame contains two packed views.\n     */\n    AV_STEREO3D_VIEW_PACKED,\n\n    /**\n     * Frame contains only the left view.\n     */\n    AV_STEREO3D_VIEW_LEFT,\n\n    /**\n     * Frame contains only the right view.\n     */\n    AV_STEREO3D_VIEW_RIGHT,\n};\n\n/**\n * Inverted views, Right/Bottom represents the left view.\n */\n#define AV_STEREO3D_FLAG_INVERT     (1 << 0)\n\n/**\n * Stereo 3D type: this structure describes how two videos are packed\n * within a single video surface, with additional information as needed.\n *\n * @note The struct must be allocated with av_stereo3d_alloc() and\n *       its size is not a part of the public ABI.\n */\ntypedef struct AVStereo3D {\n    /**\n     * How views are packed within the video.\n     */\n    enum AVStereo3DType type;\n\n    /**\n     * Additional information about the frame packing.\n     */\n    int flags;\n\n    /**\n     * Determines which views are packed.\n     */\n    enum AVStereo3DView view;\n} AVStereo3D;\n\n/**\n * Allocate an AVStereo3D structure and set its fields to default values.\n * The resulting struct can be freed using av_freep().\n *\n * @return An AVStereo3D filled with default values or NULL on failure.\n */\nAVStereo3D *av_stereo3d_alloc(void);\n\n/**\n * Allocate a complete AVFrameSideData and add it to the frame.\n *\n * @param frame The frame which side data is added to.\n *\n * @return The AVStereo3D structure to be filled by caller.\n */\nAVStereo3D *av_stereo3d_create_side_data(AVFrame *frame);\n\n/**\n * Provide a human-readable name of a given stereo3d type.\n *\n * @param type The input stereo3d type value.\n *\n * @return The name of the stereo3d value, or \"unknown\".\n */\nconst char *av_stereo3d_type_name(unsigned int type);\n\n/**\n * Get the AVStereo3DType from a human-readable name.\n *\n * 
@param name The input string.\n *\n * @return The AVStereo3DType value, or -1 if not found.\n */\nint av_stereo3d_from_name(const char *name);\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_STEREO3D_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/tea.h",
    "content": "/*\n * A 32-bit implementation of the TEA algorithm\n * Copyright (c) 2015 Vesselin Bontchev\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_TEA_H\n#define AVUTIL_TEA_H\n\n#include <stdint.h>\n\n/**\n * @file\n * @brief Public header for libavutil TEA algorithm\n * @defgroup lavu_tea TEA\n * @ingroup lavu_crypto\n * @{\n */\n\nextern const int av_tea_size;\n\nstruct AVTEA;\n\n/**\n  * Allocate an AVTEA context\n  * To free the struct: av_free(ptr)\n  */\nstruct AVTEA *av_tea_alloc(void);\n\n/**\n * Initialize an AVTEA context.\n *\n * @param ctx an AVTEA context\n * @param key a key of 16 bytes used for encryption/decryption\n * @param rounds the number of rounds in TEA (64 is the \"standard\")\n */\nvoid av_tea_init(struct AVTEA *ctx, const uint8_t key[16], int rounds);\n\n/**\n * Encrypt or decrypt a buffer using a previously initialized context.\n *\n * @param ctx an AVTEA context\n * @param dst destination array, can be equal to src\n * @param src source array, can be equal to dst\n * @param count number of 8 byte blocks\n * @param iv initialization vector for CBC mode, if NULL then ECB will be used\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_tea_crypt(struct AVTEA *ctx, uint8_t *dst, 
const uint8_t *src,\n                  int count, uint8_t *iv, int decrypt);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_TEA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/threadmessage.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public License\n * as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n * GNU Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public License\n * along with FFmpeg; if not, write to the Free Software Foundation, Inc.,\n * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_THREADMESSAGE_H\n#define AVUTIL_THREADMESSAGE_H\n\ntypedef struct AVThreadMessageQueue AVThreadMessageQueue;\n\ntypedef enum AVThreadMessageFlags {\n\n    /**\n     * Perform non-blocking operation.\n     * If this flag is set, send and recv operations are non-blocking and\n     * return AVERROR(EAGAIN) immediately if they can not proceed.\n     */\n    AV_THREAD_MESSAGE_NONBLOCK = 1,\n\n} AVThreadMessageFlags;\n\n/**\n * Allocate a new message queue.\n *\n * @param mq      pointer to the message queue\n * @param nelem   maximum number of elements in the queue\n * @param elsize  size of each element in the queue\n * @return  >=0 for success; <0 for error, in particular AVERROR(ENOSYS) if\n *          lavu was built without thread support\n */\nint av_thread_message_queue_alloc(AVThreadMessageQueue **mq,\n                                  unsigned nelem,\n                                  unsigned elsize);\n\n/**\n * Free a message queue.\n *\n * The message queue must no longer be in use by another thread.\n */\nvoid av_thread_message_queue_free(AVThreadMessageQueue **mq);\n\n/**\n * Send a message on the queue.\n */\nint 
av_thread_message_queue_send(AVThreadMessageQueue *mq,\n                                 void *msg,\n                                 unsigned flags);\n\n/**\n * Receive a message from the queue.\n */\nint av_thread_message_queue_recv(AVThreadMessageQueue *mq,\n                                 void *msg,\n                                 unsigned flags);\n\n/**\n * Set the sending error code.\n *\n * If the error code is set to non-zero, av_thread_message_queue_send() will\n * return it immediately. Conventional values, such as AVERROR_EOF or\n * AVERROR(EAGAIN), can be used to cause the sending thread to stop or\n * suspend its operation.\n */\nvoid av_thread_message_queue_set_err_send(AVThreadMessageQueue *mq,\n                                          int err);\n\n/**\n * Set the receiving error code.\n *\n * If the error code is set to non-zero, av_thread_message_queue_recv() will\n * return it immediately when there are no longer available messages.\n * Conventional values, such as AVERROR_EOF or AVERROR(EAGAIN), can be used\n * to cause the receiving thread to stop or suspend its operation.\n */\nvoid av_thread_message_queue_set_err_recv(AVThreadMessageQueue *mq,\n                                          int err);\n\n/**\n * Set the optional free message callback function which will be called if an\n * operation is removing messages from the queue.\n */\nvoid av_thread_message_queue_set_free_func(AVThreadMessageQueue *mq,\n                                           void (*free_func)(void *msg));\n\n/**\n * Flush the message queue\n *\n * This function is mostly equivalent to reading and free-ing every message\n * except that it will be done in a single operation (no lock/unlock between\n * reads).\n */\nvoid av_thread_message_flush(AVThreadMessageQueue *mq);\n\n#endif /* AVUTIL_THREADMESSAGE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/time.h",
    "content": "/*\n * Copyright (c) 2000-2003 Fabrice Bellard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_TIME_H\n#define AVUTIL_TIME_H\n\n#include <stdint.h>\n\n/**\n * Get the current time in microseconds.\n */\nint64_t av_gettime(void);\n\n/**\n * Get the current time in microseconds since some unspecified starting point.\n * On platforms that support it, the time comes from a monotonic clock\n * This property makes this time source ideal for measuring relative time.\n * The returned values may not be monotonic on platforms where a monotonic\n * clock is not available.\n */\nint64_t av_gettime_relative(void);\n\n/**\n * Indicates with a boolean result if the av_gettime_relative() time source\n * is monotonic.\n */\nint av_gettime_relative_is_monotonic(void);\n\n/**\n * Sleep for a period of time.  Although the duration is expressed in\n * microseconds, the actual delay may be rounded to the precision of the\n * system timer.\n *\n * @param  usec Number of microseconds to sleep.\n * @return zero on success or (negative) error code.\n */\nint av_usleep(unsigned usec);\n\n#endif /* AVUTIL_TIME_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/timecode.h",
    "content": "/*\n * Copyright (c) 2006 Smartjog S.A.S, Baptiste Coudurier <baptiste.coudurier@gmail.com>\n * Copyright (c) 2011-2012 Smartjog S.A.S, Clément Bœsch <clement.boesch@smartjog.com>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * Timecode helpers header\n */\n\n#ifndef AVUTIL_TIMECODE_H\n#define AVUTIL_TIMECODE_H\n\n#include <stdint.h>\n#include \"rational.h\"\n\n#define AV_TIMECODE_STR_SIZE 23\n\nenum AVTimecodeFlag {\n    AV_TIMECODE_FLAG_DROPFRAME      = 1<<0, ///< timecode is drop frame\n    AV_TIMECODE_FLAG_24HOURSMAX     = 1<<1, ///< timecode wraps after 24 hours\n    AV_TIMECODE_FLAG_ALLOWNEGATIVE  = 1<<2, ///< negative time values are allowed\n};\n\ntypedef struct {\n    int start;          ///< timecode frame start (first base frame number)\n    uint32_t flags;     ///< flags such as drop frame, +24 hours support, ...\n    AVRational rate;    ///< frame rate in rational form\n    unsigned fps;       ///< frame per second; must be consistent with the rate field\n} AVTimecode;\n\n/**\n * Adjust frame number for NTSC drop frame time code.\n *\n * @param framenum frame number to adjust\n * @param fps      frame per second, 30 or 60\n * @return         adjusted frame number\n * @warning        adjustment is 
only valid in NTSC 29.97 and 59.94\n */\nint av_timecode_adjust_ntsc_framenum2(int framenum, int fps);\n\n/**\n * Convert frame number to SMPTE 12M binary representation.\n *\n * @param tc       timecode data correctly initialized\n * @param framenum frame number\n * @return         the SMPTE binary representation\n *\n * @note Frame number adjustment is automatically done in case of drop timecode,\n *       you do NOT have to call av_timecode_adjust_ntsc_framenum2().\n * @note The frame number is relative to tc->start.\n * @note Color frame (CF), binary group flags (BGF) and biphase mark polarity\n *       correction (PC) bits are set to zero.\n */\nuint32_t av_timecode_get_smpte_from_framenum(const AVTimecode *tc, int framenum);\n\n/**\n * Load timecode string in buf.\n *\n * @param buf      destination buffer, must be at least AV_TIMECODE_STR_SIZE long\n * @param tc       timecode data correctly initialized\n * @param framenum frame number\n * @return         the buf parameter\n *\n * @note Timecode representation can be a negative timecode and have more than\n *       24 hours, but will only be honored if the flags are correctly set.\n * @note The frame number is relative to tc->start.\n */\nchar *av_timecode_make_string(const AVTimecode *tc, char *buf, int framenum);\n\n/**\n * Get the timecode string from the SMPTE timecode format.\n *\n * @param buf        destination buffer, must be at least AV_TIMECODE_STR_SIZE long\n * @param tcsmpte    the 32-bit SMPTE timecode\n * @param prevent_df prevent the use of a drop flag when it is known the DF bit\n *                   is arbitrary\n * @return           the buf parameter\n */\nchar *av_timecode_make_smpte_tc_string(char *buf, uint32_t tcsmpte, int prevent_df);\n\n/**\n * Get the timecode string from the 25-bit timecode format (MPEG GOP format).\n *\n * @param buf     destination buffer, must be at least AV_TIMECODE_STR_SIZE long\n * @param tc25bit the 25-bits timecode\n * @return        the buf parameter\n 
*/\nchar *av_timecode_make_mpeg_tc_string(char *buf, uint32_t tc25bit);\n\n/**\n * Init a timecode struct with the passed parameters.\n *\n * @param log_ctx     a pointer to an arbitrary struct of which the first field\n *                    is a pointer to an AVClass struct (used for av_log)\n * @param tc          pointer to an allocated AVTimecode\n * @param rate        frame rate in rational form\n * @param flags       miscellaneous flags such as drop frame, +24 hours, ...\n *                    (see AVTimecodeFlag)\n * @param frame_start the first frame number\n * @return            0 on success, AVERROR otherwise\n */\nint av_timecode_init(AVTimecode *tc, AVRational rate, int flags, int frame_start, void *log_ctx);\n\n/**\n * Parse timecode representation (hh:mm:ss[:;.]ff).\n *\n * @param log_ctx a pointer to an arbitrary struct of which the first field is a\n *                pointer to an AVClass struct (used for av_log).\n * @param tc      pointer to an allocated AVTimecode\n * @param rate    frame rate in rational form\n * @param str     timecode string which will determine the frame start\n * @return        0 on success, AVERROR otherwise\n */\nint av_timecode_init_from_string(AVTimecode *tc, AVRational rate, const char *str, void *log_ctx);\n\n/**\n * Check if the timecode feature is available for the given frame rate\n *\n * @return 0 if supported, <0 otherwise\n */\nint av_timecode_check_frame_rate(AVRational rate);\n\n#endif /* AVUTIL_TIMECODE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/timestamp.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * timestamp utils, mostly useful for debugging/logging purposes\n */\n\n#ifndef AVUTIL_TIMESTAMP_H\n#define AVUTIL_TIMESTAMP_H\n\n#include \"common.h\"\n\n#if defined(__cplusplus) && !defined(__STDC_FORMAT_MACROS) && !defined(PRId64)\n#error missing -D__STDC_FORMAT_MACROS / #define __STDC_FORMAT_MACROS\n#endif\n\n#define AV_TS_MAX_STRING_SIZE 32\n\n/**\n * Fill the provided buffer with a string containing a timestamp\n * representation.\n *\n * @param buf a buffer with size in bytes of at least AV_TS_MAX_STRING_SIZE\n * @param ts the timestamp to represent\n * @return the buffer in input\n */\nstatic inline char *av_ts_make_string(char *buf, int64_t ts)\n{\n    if (ts == AV_NOPTS_VALUE) snprintf(buf, AV_TS_MAX_STRING_SIZE, \"NOPTS\");\n    else                      snprintf(buf, AV_TS_MAX_STRING_SIZE, \"%\" PRId64, ts);\n    return buf;\n}\n\n/**\n * Convenience macro, the return value should be used only directly in\n * function arguments but never stand-alone.\n */\n#define av_ts2str(ts) av_ts_make_string((char[AV_TS_MAX_STRING_SIZE]){0}, ts)\n\n/**\n * Fill the provided buffer with a string containing a timestamp time\n * representation.\n *\n * @param buf a 
buffer with size in bytes of at least AV_TS_MAX_STRING_SIZE\n * @param ts the timestamp to represent\n * @param tb the timebase of the timestamp\n * @return the buffer in input\n */\nstatic inline char *av_ts_make_time_string(char *buf, int64_t ts, AVRational *tb)\n{\n    if (ts == AV_NOPTS_VALUE) snprintf(buf, AV_TS_MAX_STRING_SIZE, \"NOPTS\");\n    else                      snprintf(buf, AV_TS_MAX_STRING_SIZE, \"%.6g\", av_q2d(*tb) * ts);\n    return buf;\n}\n\n/**\n * Convenience macro, the return value should be used only directly in\n * function arguments but never stand-alone.\n */\n#define av_ts2timestr(ts, tb) av_ts_make_time_string((char[AV_TS_MAX_STRING_SIZE]){0}, ts, tb)\n\n#endif /* AVUTIL_TIMESTAMP_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/tree.h",
    "content": "/*\n * copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * A tree container.\n * @author Michael Niedermayer <michaelni@gmx.at>\n */\n\n#ifndef AVUTIL_TREE_H\n#define AVUTIL_TREE_H\n\n#include \"attributes.h\"\n#include \"version.h\"\n\n/**\n * @addtogroup lavu_tree AVTree\n * @ingroup lavu_data\n *\n * Low-complexity tree container\n *\n * Insertion, removal, finding equal, largest which is smaller than and\n * smallest which is larger than, all have O(log n) worst-case complexity.\n * @{\n */\n\n\nstruct AVTreeNode;\nextern const int av_tree_node_size;\n\n/**\n * Allocate an AVTreeNode.\n */\nstruct AVTreeNode *av_tree_node_alloc(void);\n\n/**\n * Find an element.\n * @param root a pointer to the root node of the tree\n * @param next If next is not NULL, then next[0] will contain the previous\n *             element and next[1] the next element. 
If either does not exist,\n *             then the corresponding entry in next is unchanged.\n * @param cmp compare function used to compare elements in the tree,\n *            API identical to that of Standard C's qsort\n *            It is guaranteed that the first and only the first argument to cmp()\n *            will be the key parameter to av_tree_find(), thus it could if the\n *            user wants, be a different type (like an opaque context).\n * @return An element with cmp(key, elem) == 0 or NULL if no such element\n *         exists in the tree.\n */\nvoid *av_tree_find(const struct AVTreeNode *root, void *key,\n                   int (*cmp)(const void *key, const void *b), void *next[2]);\n\n/**\n * Insert or remove an element.\n *\n * If *next is NULL, then the supplied element will be removed if it exists.\n * If *next is non-NULL, then the supplied element will be inserted, unless\n * it already exists in the tree.\n *\n * @param rootp A pointer to a pointer to the root node of the tree; note that\n *              the root node can change during insertions, this is required\n *              to keep the tree balanced.\n * @param key  pointer to the element key to insert in the tree\n * @param next Used to allocate and free AVTreeNodes. For insertion the user\n *             must set it to an allocated and zeroed object of at least\n *             av_tree_node_size bytes size. 
av_tree_insert() will set it to\n *             NULL if it has been consumed.\n *             For deleting elements *next is set to NULL by the user and\n *             av_tree_insert() will set it to the AVTreeNode which was\n *             used for the removed element.\n *             This allows the use of flat arrays, which have\n *             lower overhead compared to many malloced elements.\n *             You might want to define a function like:\n *             @code\n *             void *tree_insert(struct AVTreeNode **rootp, void *key,\n *                               int (*cmp)(void *key, const void *b),\n *                               AVTreeNode **next)\n *             {\n *                 if (!*next)\n *                     *next = av_mallocz(av_tree_node_size);\n *                 return av_tree_insert(rootp, key, cmp, next);\n *             }\n *             void *tree_remove(struct AVTreeNode **rootp, void *key,\n *                               int (*cmp)(void *key, const void *b),\n *                               AVTreeNode **next)\n *             {\n *                 av_freep(next);\n *                 return av_tree_insert(rootp, key, cmp, next);\n *             }\n *             @endcode\n * @param cmp compare function used to compare elements in the tree, API identical\n *            to that of Standard C's qsort\n * @return If no insertion happened, the found element; if an insertion or\n *         removal happened, then either key or NULL will be returned.\n *         Which one it is depends on the tree state and the implementation. 
You\n *         should make no assumptions that it's one or the other in the code.\n */\nvoid *av_tree_insert(struct AVTreeNode **rootp, void *key,\n                     int (*cmp)(const void *key, const void *b),\n                     struct AVTreeNode **next);\n\nvoid av_tree_destroy(struct AVTreeNode *t);\n\n/**\n * Apply enu(opaque, &elem) to all the elements in the tree in a given range.\n *\n * @param cmp a comparison function that returns < 0 for an element below the\n *            range, > 0 for an element above the range and == 0 for an\n *            element inside the range\n *\n * @note The cmp function should use the same ordering used to construct the\n *       tree.\n */\nvoid av_tree_enumerate(struct AVTreeNode *t, void *opaque,\n                       int (*cmp)(void *opaque, void *elem),\n                       int (*enu)(void *opaque, void *elem));\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_TREE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/twofish.h",
    "content": "/*\n * An implementation of the TwoFish algorithm\n * Copyright (c) 2015 Supraja Meedinti\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_TWOFISH_H\n#define AVUTIL_TWOFISH_H\n\n#include <stdint.h>\n\n\n/**\n  * @file\n  * @brief Public header for libavutil TWOFISH algorithm\n  * @defgroup lavu_twofish TWOFISH\n  * @ingroup lavu_crypto\n  * @{\n  */\n\nextern const int av_twofish_size;\n\nstruct AVTWOFISH;\n\n/**\n  * Allocate an AVTWOFISH context\n  * To free the struct: av_free(ptr)\n  */\nstruct AVTWOFISH *av_twofish_alloc(void);\n\n/**\n  * Initialize an AVTWOFISH context.\n  *\n  * @param ctx an AVTWOFISH context\n  * @param key a key of size ranging from 1 to 32 bytes used for encryption/decryption\n  * @param key_bits number of keybits: 128, 192, 256 If less than the required, padded with zeroes to nearest valid value; return value is 0 if key_bits is 128/192/256, -1 if less than 0, 1 otherwise\n */\nint av_twofish_init(struct AVTWOFISH *ctx, const uint8_t *key, int key_bits);\n\n/**\n  * Encrypt or decrypt a buffer using a previously initialized context\n  *\n  * @param ctx an AVTWOFISH context\n  * @param dst destination array, can be equal to src\n  * @param src source array, can be equal to dst\n  * 
@param count number of 16 byte blocks\n  * @param iv initialization vector for CBC mode, NULL for ECB mode\n  * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_twofish_crypt(struct AVTWOFISH *ctx, uint8_t *dst, const uint8_t *src, int count, uint8_t* iv, int decrypt);\n\n/**\n * @}\n */\n#endif /* AVUTIL_TWOFISH_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/version.h",
    "content": "/*\n * copyright (c) 2003 Fabrice Bellard\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n/**\n * @file\n * @ingroup lavu\n * Libavutil version macros\n */\n\n#ifndef AVUTIL_VERSION_H\n#define AVUTIL_VERSION_H\n\n#include \"macros.h\"\n\n/**\n * @addtogroup version_utils\n *\n * Useful to check and match library version in order to maintain\n * backward compatibility.\n *\n * The FFmpeg libraries follow a versioning scheme very similar to\n * Semantic Versioning (http://semver.org/)\n * The difference is that the component called PATCH is called MICRO in FFmpeg\n * and its value is reset to 100 instead of 0 to keep it above or equal to 100.\n * Also we do not increase MICRO for every bugfix or change in git master.\n *\n * Prior to FFmpeg 3.2 point releases did not change any lib version number to\n * avoid aliasing different git master checkouts.\n * Starting with FFmpeg 3.2, the released library versions will occupy\n * a separate MAJOR.MINOR that is not used on the master development branch.\n * That is if we branch a release of master 55.10.123 we will bump to 55.11.100\n * for the release and master will continue at 55.12.100 after it. 
Each new\n * point release will then bump the MICRO improving the usefulness of the lib\n * versions.\n *\n * @{\n */\n\n#define AV_VERSION_INT(a, b, c) ((a)<<16 | (b)<<8 | (c))\n#define AV_VERSION_DOT(a, b, c) a ##.## b ##.## c\n#define AV_VERSION(a, b, c) AV_VERSION_DOT(a, b, c)\n\n/**\n * Extract version components from the full ::AV_VERSION_INT int as returned\n * by functions like ::avformat_version() and ::avcodec_version()\n */\n#define AV_VERSION_MAJOR(a) ((a) >> 16)\n#define AV_VERSION_MINOR(a) (((a) & 0x00FF00) >> 8)\n#define AV_VERSION_MICRO(a) ((a) & 0xFF)\n\n/**\n * @}\n */\n\n/**\n * @defgroup lavu_ver Version and Build diagnostics\n *\n * Macros and function useful to check at compiletime and at runtime\n * which version of libavutil is in use.\n *\n * @{\n */\n\n#define LIBAVUTIL_VERSION_MAJOR  56\n#define LIBAVUTIL_VERSION_MINOR  14\n#define LIBAVUTIL_VERSION_MICRO 100\n\n#define LIBAVUTIL_VERSION_INT   AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \\\n                                               LIBAVUTIL_VERSION_MINOR, \\\n                                               LIBAVUTIL_VERSION_MICRO)\n#define LIBAVUTIL_VERSION       AV_VERSION(LIBAVUTIL_VERSION_MAJOR,     \\\n                                           LIBAVUTIL_VERSION_MINOR,     \\\n                                           LIBAVUTIL_VERSION_MICRO)\n#define LIBAVUTIL_BUILD         LIBAVUTIL_VERSION_INT\n\n#define LIBAVUTIL_IDENT         \"Lavu\" AV_STRINGIFY(LIBAVUTIL_VERSION)\n\n/**\n * @defgroup lavu_depr_guards Deprecation Guards\n * FF_API_* defines may be placed below to indicate public API that will be\n * dropped at a future version bump. The defines themselves are not part of\n * the public API and may change, break or disappear at any time.\n *\n * @note, when bumping the major version it is recommended to manually\n * disable each FF_API_* in its own commit instead of disabling them all\n * at once through the bump. 
This improves the git bisect-ability of the change.\n *\n * @{\n */\n\n#ifndef FF_API_VAAPI\n#define FF_API_VAAPI                    (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_FRAME_QP\n#define FF_API_FRAME_QP                 (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_PLUS1_MINUS1\n#define FF_API_PLUS1_MINUS1             (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_ERROR_FRAME\n#define FF_API_ERROR_FRAME              (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_PKT_PTS\n#define FF_API_PKT_PTS                  (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_CRYPTO_SIZE_T\n#define FF_API_CRYPTO_SIZE_T            (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_FRAME_GET_SET\n#define FF_API_FRAME_GET_SET            (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n#ifndef FF_API_PSEUDOPAL\n#define FF_API_PSEUDOPAL                (LIBAVUTIL_VERSION_MAJOR < 57)\n#endif\n\n\n/**\n * @}\n * @}\n */\n\n#endif /* AVUTIL_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libavutil/xtea.h",
    "content": "/*\n * A 32-bit implementation of the XTEA algorithm\n * Copyright (c) 2012 Samuel Pitoiset\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef AVUTIL_XTEA_H\n#define AVUTIL_XTEA_H\n\n#include <stdint.h>\n\n/**\n * @file\n * @brief Public header for libavutil XTEA algorithm\n * @defgroup lavu_xtea XTEA\n * @ingroup lavu_crypto\n * @{\n */\n\ntypedef struct AVXTEA {\n    uint32_t key[16];\n} AVXTEA;\n\n/**\n * Allocate an AVXTEA context.\n */\nAVXTEA *av_xtea_alloc(void);\n\n/**\n * Initialize an AVXTEA context.\n *\n * @param ctx an AVXTEA context\n * @param key a key of 16 bytes used for encryption/decryption,\n *            interpreted as big endian 32 bit numbers\n */\nvoid av_xtea_init(struct AVXTEA *ctx, const uint8_t key[16]);\n\n/**\n * Initialize an AVXTEA context.\n *\n * @param ctx an AVXTEA context\n * @param key a key of 16 bytes used for encryption/decryption,\n *            interpreted as little endian 32 bit numbers\n */\nvoid av_xtea_le_init(struct AVXTEA *ctx, const uint8_t key[16]);\n\n/**\n * Encrypt or decrypt a buffer using a previously initialized context,\n * in big endian format.\n *\n * @param ctx an AVXTEA context\n * @param dst destination array, can be equal to src\n * @param src source array, can 
be equal to dst\n * @param count number of 8 byte blocks\n * @param iv initialization vector for CBC mode, if NULL then ECB will be used\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_xtea_crypt(struct AVXTEA *ctx, uint8_t *dst, const uint8_t *src,\n                   int count, uint8_t *iv, int decrypt);\n\n/**\n * Encrypt or decrypt a buffer using a previously initialized context,\n * in little endian format.\n *\n * @param ctx an AVXTEA context\n * @param dst destination array, can be equal to src\n * @param src source array, can be equal to dst\n * @param count number of 8 byte blocks\n * @param iv initialization vector for CBC mode, if NULL then ECB will be used\n * @param decrypt 0 for encryption, 1 for decryption\n */\nvoid av_xtea_le_crypt(struct AVXTEA *ctx, uint8_t *dst, const uint8_t *src,\n                      int count, uint8_t *iv, int decrypt);\n\n/**\n * @}\n */\n\n#endif /* AVUTIL_XTEA_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libpostproc/postprocess.h",
    "content": "/*\n * Copyright (C) 2001-2003 Michael Niedermayer (michaelni@gmx.at)\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or modify\n * it under the terms of the GNU General Public License as published by\n * the Free Software Foundation; either version 2 of the License, or\n * (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n * GNU General Public License for more details.\n *\n * You should have received a copy of the GNU General Public License\n * along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef POSTPROC_POSTPROCESS_H\n#define POSTPROC_POSTPROCESS_H\n\n/**\n * @file\n * @ingroup lpp\n * external API header\n */\n\n/**\n * @defgroup lpp libpostproc\n * Video postprocessing library.\n *\n * @{\n */\n\n#include \"libpostproc/version.h\"\n\n/**\n * Return the LIBPOSTPROC_VERSION_INT constant.\n */\nunsigned postproc_version(void);\n\n/**\n * Return the libpostproc build-time configuration.\n */\nconst char *postproc_configuration(void);\n\n/**\n * Return the libpostproc license.\n */\nconst char *postproc_license(void);\n\n#define PP_QUALITY_MAX 6\n\n#include <inttypes.h>\n\ntypedef void pp_context;\ntypedef void pp_mode;\n\n#if LIBPOSTPROC_VERSION_INT < (52<<16)\ntypedef pp_context pp_context_t;\ntypedef pp_mode pp_mode_t;\nextern const char *const pp_help; ///< a simple help text\n#else\nextern const char pp_help[]; ///< a simple help text\n#endif\n\nvoid  pp_postprocess(const uint8_t * src[3], const int srcStride[3],\n                     uint8_t * dst[3], const int dstStride[3],\n                     int horizontalSize, int verticalSize,\n                     const int8_t *QP_store,  int QP_stride,\n                    
 pp_mode *mode, pp_context *ppContext, int pict_type);\n\n\n/**\n * Return a pp_mode or NULL if an error occurred.\n *\n * @param name    the string after \"-pp\" on the command line\n * @param quality a number from 0 to PP_QUALITY_MAX\n */\npp_mode *pp_get_mode_by_name_and_quality(const char *name, int quality);\nvoid pp_free_mode(pp_mode *mode);\n\npp_context *pp_get_context(int width, int height, int flags);\nvoid pp_free_context(pp_context *ppContext);\n\n#define PP_CPU_CAPS_MMX   0x80000000\n#define PP_CPU_CAPS_MMX2  0x20000000\n#define PP_CPU_CAPS_3DNOW 0x40000000\n#define PP_CPU_CAPS_ALTIVEC 0x10000000\n#define PP_CPU_CAPS_AUTO  0x00080000\n\n#define PP_FORMAT         0x00000008\n#define PP_FORMAT_420    (0x00000011|PP_FORMAT)\n#define PP_FORMAT_422    (0x00000001|PP_FORMAT)\n#define PP_FORMAT_411    (0x00000002|PP_FORMAT)\n#define PP_FORMAT_444    (0x00000000|PP_FORMAT)\n#define PP_FORMAT_440    (0x00000010|PP_FORMAT)\n\n#define PP_PICT_TYPE_QP2  0x00000010 ///< MPEG2 style QScale\n\n/**\n * @}\n */\n\n#endif /* POSTPROC_POSTPROCESS_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libpostproc/version.h",
    "content": "/*\n * Version macros.\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef POSTPROC_VERSION_H\n#define POSTPROC_VERSION_H\n\n/**\n * @file\n * Libpostproc version macros\n */\n\n#include \"libavutil/avutil.h\"\n\n#define LIBPOSTPROC_VERSION_MAJOR  55\n#define LIBPOSTPROC_VERSION_MINOR   1\n#define LIBPOSTPROC_VERSION_MICRO 100\n\n#define LIBPOSTPROC_VERSION_INT AV_VERSION_INT(LIBPOSTPROC_VERSION_MAJOR, \\\n                                               LIBPOSTPROC_VERSION_MINOR, \\\n                                               LIBPOSTPROC_VERSION_MICRO)\n#define LIBPOSTPROC_VERSION     AV_VERSION(LIBPOSTPROC_VERSION_MAJOR, \\\n                                           LIBPOSTPROC_VERSION_MINOR, \\\n                                           LIBPOSTPROC_VERSION_MICRO)\n#define LIBPOSTPROC_BUILD       LIBPOSTPROC_VERSION_INT\n\n#define LIBPOSTPROC_IDENT       \"postproc\" AV_STRINGIFY(LIBPOSTPROC_VERSION)\n\n#endif /* POSTPROC_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libswresample/swresample.h",
    "content": "/*\n * Copyright (C) 2011-2013 Michael Niedermayer (michaelni@gmx.at)\n *\n * This file is part of libswresample\n *\n * libswresample is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * libswresample is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with libswresample; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef SWRESAMPLE_SWRESAMPLE_H\n#define SWRESAMPLE_SWRESAMPLE_H\n\n/**\n * @file\n * @ingroup lswr\n * libswresample public header\n */\n\n/**\n * @defgroup lswr libswresample\n * @{\n *\n * Audio resampling, sample format conversion and mixing library.\n *\n * Interaction with lswr is done through SwrContext, which is\n * allocated with swr_alloc() or swr_alloc_set_opts(). It is opaque, so all parameters\n * must be set with the @ref avoptions API.\n *\n * The first thing you will need to do in order to use lswr is to allocate\n * SwrContext. This can be done with swr_alloc() or swr_alloc_set_opts(). If you\n * are using the former, you must set options through the @ref avoptions API.\n * The latter function provides the same feature, but it allows you to set some\n * common options in the same statement.\n *\n * For example the following code will setup conversion from planar float sample\n * format to interleaved signed 16-bit integer, downsampling from 48kHz to\n * 44.1kHz and downmixing from 5.1 channels to stereo (using the default mixing\n * matrix). 
This is using the swr_alloc() function.\n * @code\n * SwrContext *swr = swr_alloc();\n * av_opt_set_channel_layout(swr, \"in_channel_layout\",  AV_CH_LAYOUT_5POINT1, 0);\n * av_opt_set_channel_layout(swr, \"out_channel_layout\", AV_CH_LAYOUT_STEREO,  0);\n * av_opt_set_int(swr, \"in_sample_rate\",     48000,                0);\n * av_opt_set_int(swr, \"out_sample_rate\",    44100,                0);\n * av_opt_set_sample_fmt(swr, \"in_sample_fmt\",  AV_SAMPLE_FMT_FLTP, 0);\n * av_opt_set_sample_fmt(swr, \"out_sample_fmt\", AV_SAMPLE_FMT_S16,  0);\n * @endcode\n *\n * The same job can be done using swr_alloc_set_opts() as well:\n * @code\n * SwrContext *swr = swr_alloc_set_opts(NULL,  // we're allocating a new context\n *                       AV_CH_LAYOUT_STEREO,  // out_ch_layout\n *                       AV_SAMPLE_FMT_S16,    // out_sample_fmt\n *                       44100,                // out_sample_rate\n *                       AV_CH_LAYOUT_5POINT1, // in_ch_layout\n *                       AV_SAMPLE_FMT_FLTP,   // in_sample_fmt\n *                       48000,                // in_sample_rate\n *                       0,                    // log_offset\n *                       NULL);                // log_ctx\n * @endcode\n *\n * Once all values have been set, it must be initialized with swr_init(). If\n * you need to change the conversion parameters, you can change the parameters\n * using @ref AVOptions, as described above in the first example; or by using\n * swr_alloc_set_opts(), but with the first argument the allocated context.\n * You must then call swr_init() again.\n *\n * The conversion itself is done by repeatedly calling swr_convert().\n * Note that the samples may get buffered in swr if you provide insufficient\n * output space or if sample rate conversion is done, which requires \"future\"\n * samples. 
Samples that do not require future input can be retrieved at any\n * time by using swr_convert() (in_count can be set to 0).\n * At the end of conversion the resampling buffer can be flushed by calling\n * swr_convert() with NULL in and 0 in_count.\n *\n * The samples used in the conversion process can be managed with the libavutil\n * @ref lavu_sampmanip \"samples manipulation\" API, including av_samples_alloc()\n * function used in the following example.\n *\n * The delay between input and output, can at any time be found by using\n * swr_get_delay().\n *\n * The following code demonstrates the conversion loop assuming the parameters\n * from above and caller-defined functions get_input() and handle_output():\n * @code\n * uint8_t **input;\n * int in_samples;\n *\n * while (get_input(&input, &in_samples)) {\n *     uint8_t *output;\n *     int out_samples = av_rescale_rnd(swr_get_delay(swr, 48000) +\n *                                      in_samples, 44100, 48000, AV_ROUND_UP);\n *     av_samples_alloc(&output, NULL, 2, out_samples,\n *                      AV_SAMPLE_FMT_S16, 0);\n *     out_samples = swr_convert(swr, &output, out_samples,\n *                                      input, in_samples);\n *     handle_output(output, out_samples);\n *     av_freep(&output);\n * }\n * @endcode\n *\n * When the conversion is finished, the conversion\n * context and everything associated with it must be freed with swr_free().\n * A swr_close() function is also available, but it exists mainly for\n * compatibility with libavresample, and is not required to be called.\n *\n * There will be no memory leak if the data is not completely flushed before\n * swr_free().\n */\n\n#include <stdint.h>\n#include \"libavutil/channel_layout.h\"\n#include \"libavutil/frame.h\"\n#include \"libavutil/samplefmt.h\"\n\n#include \"libswresample/version.h\"\n\n/**\n * @name Option constants\n * These constants are used for the @ref avoptions interface for lswr.\n * @{\n *\n */\n\n#define 
SWR_FLAG_RESAMPLE 1 ///< Force resampling even if equal sample rate\n//TODO use int resample ?\n//long term TODO can we enable this dynamically?\n\n/** Dithering algorithms */\nenum SwrDitherType {\n    SWR_DITHER_NONE = 0,\n    SWR_DITHER_RECTANGULAR,\n    SWR_DITHER_TRIANGULAR,\n    SWR_DITHER_TRIANGULAR_HIGHPASS,\n\n    SWR_DITHER_NS = 64,         ///< not part of API/ABI\n    SWR_DITHER_NS_LIPSHITZ,\n    SWR_DITHER_NS_F_WEIGHTED,\n    SWR_DITHER_NS_MODIFIED_E_WEIGHTED,\n    SWR_DITHER_NS_IMPROVED_E_WEIGHTED,\n    SWR_DITHER_NS_SHIBATA,\n    SWR_DITHER_NS_LOW_SHIBATA,\n    SWR_DITHER_NS_HIGH_SHIBATA,\n    SWR_DITHER_NB,              ///< not part of API/ABI\n};\n\n/** Resampling Engines */\nenum SwrEngine {\n    SWR_ENGINE_SWR,             /**< SW Resampler */\n    SWR_ENGINE_SOXR,            /**< SoX Resampler */\n    SWR_ENGINE_NB,              ///< not part of API/ABI\n};\n\n/** Resampling Filter Types */\nenum SwrFilterType {\n    SWR_FILTER_TYPE_CUBIC,              /**< Cubic */\n    SWR_FILTER_TYPE_BLACKMAN_NUTTALL,   /**< Blackman Nuttall windowed sinc */\n    SWR_FILTER_TYPE_KAISER,             /**< Kaiser windowed sinc */\n};\n\n/**\n * @}\n */\n\n/**\n * The libswresample context. Unlike libavcodec and libavformat, this structure\n * is opaque. This means that if you would like to set options, you must use\n * the @ref avoptions API and cannot directly set values to members of the\n * structure.\n */\ntypedef struct SwrContext SwrContext;\n\n/**\n * Get the AVClass for SwrContext. 
It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n * @return the AVClass of SwrContext\n */\nconst AVClass *swr_get_class(void);\n\n/**\n * @name SwrContext constructor functions\n * @{\n */\n\n/**\n * Allocate SwrContext.\n *\n * If you use this function you will need to set the parameters (manually or\n * with swr_alloc_set_opts()) before calling swr_init().\n *\n * @see swr_alloc_set_opts(), swr_init(), swr_free()\n * @return NULL on error, allocated context otherwise\n */\nstruct SwrContext *swr_alloc(void);\n\n/**\n * Initialize context after user parameters have been set.\n * @note The context must be configured using the AVOption API.\n *\n * @see av_opt_set_int()\n * @see av_opt_set_dict()\n *\n * @param[in,out]   s Swr context to initialize\n * @return AVERROR error code in case of failure.\n */\nint swr_init(struct SwrContext *s);\n\n/**\n * Check whether an swr context has been initialized or not.\n *\n * @param[in]       s Swr context to check\n * @see swr_init()\n * @return positive if it has been initialized, 0 if not initialized\n */\nint swr_is_initialized(struct SwrContext *s);\n\n/**\n * Allocate SwrContext if needed and set/reset common parameters.\n *\n * This function does not require s to be allocated with swr_alloc(). 
On the\n * other hand, swr_alloc() can use swr_alloc_set_opts() to set the parameters\n * on the allocated context.\n *\n * @param s               existing Swr context if available, or NULL if not\n * @param out_ch_layout   output channel layout (AV_CH_LAYOUT_*)\n * @param out_sample_fmt  output sample format (AV_SAMPLE_FMT_*).\n * @param out_sample_rate output sample rate (frequency in Hz)\n * @param in_ch_layout    input channel layout (AV_CH_LAYOUT_*)\n * @param in_sample_fmt   input sample format (AV_SAMPLE_FMT_*).\n * @param in_sample_rate  input sample rate (frequency in Hz)\n * @param log_offset      logging level offset\n * @param log_ctx         parent logging context, can be NULL\n *\n * @see swr_init(), swr_free()\n * @return NULL on error, allocated context otherwise\n */\nstruct SwrContext *swr_alloc_set_opts(struct SwrContext *s,\n                                      int64_t out_ch_layout, enum AVSampleFormat out_sample_fmt, int out_sample_rate,\n                                      int64_t  in_ch_layout, enum AVSampleFormat  in_sample_fmt, int  in_sample_rate,\n                                      int log_offset, void *log_ctx);\n\n/**\n * @}\n *\n * @name SwrContext destructor functions\n * @{\n */\n\n/**\n * Free the given SwrContext and set the pointer to NULL.\n *\n * @param[in] s a pointer to a pointer to Swr context\n */\nvoid swr_free(struct SwrContext **s);\n\n/**\n * Closes the context so that swr_is_initialized() returns 0.\n *\n * The context can be brought back to life by running swr_init(),\n * swr_init() can also be used without swr_close().\n * This function is mainly provided for simplifying the usecase\n * where one tries to support libavresample and libswresample.\n *\n * @param[in,out] s Swr context to be closed\n */\nvoid swr_close(struct SwrContext *s);\n\n/**\n * @}\n *\n * @name Core conversion functions\n * @{\n */\n\n/** Convert audio.\n *\n * in and in_count can be set to 0 to flush the last few samples out at the\n * 
end.\n *\n * If more input is provided than output space, then the input will be buffered.\n * You can avoid this buffering by using swr_get_out_samples() to retrieve an\n * upper bound on the required number of output samples for the given number of\n * input samples. Conversion will run directly without copying whenever possible.\n *\n * @param s         allocated Swr context, with parameters set\n * @param out       output buffers, only the first one needs to be set in case of packed audio\n * @param out_count amount of space available for output in samples per channel\n * @param in        input buffers, only the first one needs to be set in case of packed audio\n * @param in_count  number of input samples available in one channel\n *\n * @return number of samples output per channel, negative value on error\n */\nint swr_convert(struct SwrContext *s, uint8_t **out, int out_count,\n                                const uint8_t **in , int in_count);\n
\n/**\n * Convert the next timestamp from input to output;\n * timestamps are in 1/(in_sample_rate * out_sample_rate) units.\n *\n * @note There are 2 slightly differently behaving modes.\n *       @li When automatic timestamp compensation is not used, (min_compensation >= FLT_MAX)\n *              in this case timestamps will be passed through with delays compensated\n *       @li When automatic timestamp compensation is used, (min_compensation < FLT_MAX)\n *              in this case the output timestamps will match output sample numbers.\n *              See ffmpeg-resampler(1) for the two modes of compensation.\n *\n * @param[in] s     initialized Swr context\n * @param[in] pts   timestamp for the next input sample, INT64_MIN if unknown\n * @see swr_set_compensation(), swr_drop_output(), and swr_inject_silence() are\n *      functions used internally for timestamp compensation.\n * @return the output timestamp for the next output sample\n */\nint64_t swr_next_pts(struct SwrContext *s, int64_t pts);\n
\n/**\n * @}\n *\n * @name Low-level option setting functions\n * These functions provide a means to set low-level options that cannot be set\n * with the AVOption API.\n * @{\n */\n\n/**\n * Activate resampling compensation (\"soft\" compensation). This function is\n * internally called when needed in swr_next_pts().\n *\n * @param[in,out] s             allocated Swr context. If it is not initialized,\n *                              or SWR_FLAG_RESAMPLE is not set, swr_init() is\n *                              called with the flag set.\n * @param[in]     sample_delta  delta in PTS per sample\n * @param[in]     compensation_distance number of samples to compensate for\n * @return    >= 0 on success, AVERROR error codes if:\n *            @li @c s is NULL,\n *            @li @c compensation_distance is less than 0,\n *            @li @c compensation_distance is 0 but sample_delta is not,\n *            @li compensation unsupported by resampler, or\n *            @li swr_init() fails when called.\n */\nint swr_set_compensation(struct SwrContext *s, int sample_delta, int compensation_distance);\n
\n/**\n * Set a customized input channel mapping.\n *\n * @param[in,out] s           allocated Swr context, not yet initialized\n * @param[in]     channel_map customized input channel mapping (array of channel\n *                            indexes, -1 for a muted channel)\n * @return >= 0 on success, or AVERROR error code in case of failure.\n */\nint swr_set_channel_mapping(struct SwrContext *s, const int *channel_map);\n\n/**\n * Generate a channel mixing matrix.\n *\n * This function is the one used internally by libswresample for building the\n * default mixing matrix. 
It is made public just as a utility function for\n * building custom matrices.\n *\n * @param in_layout           input channel layout\n * @param out_layout          output channel layout\n * @param center_mix_level    mix level for the center channel\n * @param surround_mix_level  mix level for the surround channel(s)\n * @param lfe_mix_level       mix level for the low-frequency effects channel\n * @param rematrix_maxval     if 1.0, coefficients will be normalized to prevent\n *                            overflow. if INT_MAX, coefficients will not be\n *                            normalized.\n * @param[out] matrix         mixing coefficients; matrix[i + stride * o] is\n *                            the weight of input channel i in output channel o.\n * @param stride              distance between adjacent input channels in the\n *                            matrix array\n * @param matrix_encoding     matrixed stereo downmix mode (e.g. dplii)\n * @param log_ctx             parent logging context, can be NULL\n * @return                    0 on success, negative AVERROR code on failure\n */\nint swr_build_matrix(uint64_t in_layout, uint64_t out_layout,\n                     double center_mix_level, double surround_mix_level,\n                     double lfe_mix_level, double rematrix_maxval,\n                     double rematrix_volume, double *matrix,\n                     int stride, enum AVMatrixEncoding matrix_encoding,\n                     void *log_ctx);\n\n/**\n * Set a customized remix matrix.\n *\n * @param s       allocated Swr context, not yet initialized\n * @param matrix  remix coefficients; matrix[i + stride * o] is\n *                the weight of input channel i in output channel o\n * @param stride  offset between lines of the matrix\n * @return  >= 0 on success, or AVERROR error code in case of failure.\n */\nint swr_set_matrix(struct SwrContext *s, const double *matrix, int stride);\n\n/**\n * @}\n *\n * @name Sample handling functions\n * @{\n 
*/\n\n/**\n * Drops the specified number of output samples.\n *\n * This function, along with swr_inject_silence(), is called by swr_next_pts()\n * if needed for \"hard\" compensation.\n *\n * @param s     allocated Swr context\n * @param count number of samples to be dropped\n *\n * @return >= 0 on success, or a negative AVERROR code on failure\n */\nint swr_drop_output(struct SwrContext *s, int count);\n
\n/**\n * Injects the specified number of silence samples.\n *\n * This function, along with swr_drop_output(), is called by swr_next_pts()\n * if needed for \"hard\" compensation.\n *\n * @param s     allocated Swr context\n * @param count number of silence samples to be injected\n *\n * @return >= 0 on success, or a negative AVERROR code on failure\n */\nint swr_inject_silence(struct SwrContext *s, int count);\n
\n/**\n * Gets the delay the next input sample will experience relative to the next output sample.\n *\n * Swresample can buffer data if more input has been provided than available\n * output space; converting between sample rates also introduces a delay.\n * This function returns the sum of all such delays.\n * The exact delay is not necessarily an integer value in either input or\n * output sample rate. 
Especially when downsampling by a large value, the\n * output sample rate may be a poor choice to represent the delay, similarly\n * for upsampling and the input sample rate.\n *\n * @param s     swr context\n * @param base  timebase in which the returned delay will be:\n *              @li if it's set to 1 the returned delay is in seconds\n *              @li if it's set to 1000 the returned delay is in milliseconds\n *              @li if it's set to the input sample rate then the returned\n *                  delay is in input samples\n *              @li if it's set to the output sample rate then the returned\n *                  delay is in output samples\n *              @li if it's the least common multiple of in_sample_rate and\n *                  out_sample_rate then an exact rounding-free delay will be\n *                  returned\n * @returns     the delay in 1 / @c base units.\n */\nint64_t swr_get_delay(struct SwrContext *s, int64_t base);\n\n/**\n * Find an upper bound on the number of samples that the next swr_convert\n * call will output, if called with in_samples of input samples. 
This\n * depends on the internal state, and anything changing the internal state\n * (like further swr_convert() calls) may change the number of samples\n * swr_get_out_samples() returns for the same number of input samples.\n *\n * @param in_samples    number of input samples.\n * @note any call to swr_inject_silence(), swr_convert(), swr_next_pts()\n *       or swr_set_compensation() invalidates this limit\n * @note it is recommended to pass the correct available buffer size\n *       to all functions like swr_convert() even if swr_get_out_samples()\n *       indicates that less would be used.\n * @returns an upper bound on the number of samples that the next swr_convert\n *          will output or a negative value to indicate an error\n */\nint swr_get_out_samples(struct SwrContext *s, int in_samples);\n
\n/**\n * @}\n *\n * @name Configuration accessors\n * @{\n */\n\n/**\n * Return the @ref LIBSWRESAMPLE_VERSION_INT constant.\n *\n * This is useful to check if the build-time libswresample has the same version\n * as the run-time one.\n *\n * @returns     the unsigned int-typed version\n */\nunsigned swresample_version(void);\n\n/**\n * Return the swr build-time configuration.\n *\n * @returns     the build-time @c ./configure flags\n */\nconst char *swresample_configuration(void);\n\n/**\n * Return the swr license.\n *\n * @returns     the license of libswresample, determined at build-time\n */\nconst char *swresample_license(void);\n
\n/**\n * @}\n *\n * @name AVFrame based API\n * @{\n */\n\n/**\n * Convert the samples in the input AVFrame and write them to the output AVFrame.\n *\n * Input and output AVFrames must have channel_layout, sample_rate and format set.\n *\n * If the output AVFrame does not have the data pointers allocated, the nb_samples\n * field will be set and av_frame_get_buffer()\n * is called to allocate the frame.\n *\n * The output AVFrame can be NULL or have fewer allocated samples than required.\n * In this case, any remaining samples not written to the output will be added\n * to an internal FIFO buffer, to be returned at the next call to this function\n * or to swr_convert().\n *\n * If converting sample rate, there may be data remaining in the internal\n * resampling delay buffer. swr_get_delay() tells the number of\n * remaining samples. To get this data as output, call this function or\n * swr_convert() with NULL input.\n *\n * If the SwrContext configuration does not match the output and\n * input AVFrame settings, the conversion does not take place and, depending on\n * which AVFrame is not matching, AVERROR_OUTPUT_CHANGED, AVERROR_INPUT_CHANGED\n * or their bitwise OR is returned.\n *\n * @see swr_convert()\n * @see swr_get_delay()\n *\n * @param swr             audio resample context\n * @param output          output AVFrame\n * @param input           input AVFrame\n * @return                0 on success, AVERROR on failure or nonmatching\n *                        configuration.\n */\nint swr_convert_frame(SwrContext *swr,\n                      AVFrame *output, const AVFrame *input);\n
\n/**\n * Configure or reconfigure the SwrContext using the information\n * provided by the AVFrames.\n *\n * The original resampling context is reset even on failure.\n * The function calls swr_close() internally if the context is open.\n *\n * @see swr_close()\n *\n * @param swr             audio resample context\n * @param out             output AVFrame\n * @param in              input AVFrame\n * @return                0 on success, AVERROR on failure.\n */\nint swr_config_frame(SwrContext *swr, const AVFrame *out, const AVFrame *in);\n\n/**\n * @}\n * @}\n */\n\n#endif /* SWRESAMPLE_SWRESAMPLE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libswresample/version.h",
    "content": "/*\n * Version macros.\n *\n * This file is part of libswresample\n *\n * libswresample is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * libswresample is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with libswresample; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef SWRESAMPLE_VERSION_H\n#define SWRESAMPLE_VERSION_H\n\n/**\n * @file\n * Libswresample version macros\n */\n\n#include \"libavutil/avutil.h\"\n\n#define LIBSWRESAMPLE_VERSION_MAJOR   3\n#define LIBSWRESAMPLE_VERSION_MINOR   1\n#define LIBSWRESAMPLE_VERSION_MICRO 100\n\n#define LIBSWRESAMPLE_VERSION_INT  AV_VERSION_INT(LIBSWRESAMPLE_VERSION_MAJOR, \\\n                                                  LIBSWRESAMPLE_VERSION_MINOR, \\\n                                                  LIBSWRESAMPLE_VERSION_MICRO)\n#define LIBSWRESAMPLE_VERSION      AV_VERSION(LIBSWRESAMPLE_VERSION_MAJOR, \\\n                                              LIBSWRESAMPLE_VERSION_MINOR, \\\n                                              LIBSWRESAMPLE_VERSION_MICRO)\n#define LIBSWRESAMPLE_BUILD        LIBSWRESAMPLE_VERSION_INT\n\n#define LIBSWRESAMPLE_IDENT        \"SwR\" AV_STRINGIFY(LIBSWRESAMPLE_VERSION)\n\n#endif /* SWRESAMPLE_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libswscale/swscale.h",
    "content": "/*\n * Copyright (C) 2001-2011 Michael Niedermayer <michaelni@gmx.at>\n *\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef SWSCALE_SWSCALE_H\n#define SWSCALE_SWSCALE_H\n\n/**\n * @file\n * @ingroup libsws\n * external API header\n */\n\n#include <stdint.h>\n\n#include \"libavutil/avutil.h\"\n#include \"libavutil/log.h\"\n#include \"libavutil/pixfmt.h\"\n#include \"version.h\"\n\n/**\n * @defgroup libsws libswscale\n * Color conversion and scaling library.\n *\n * @{\n *\n * Return the LIBSWSCALE_VERSION_INT constant.\n */\nunsigned swscale_version(void);\n\n/**\n * Return the libswscale build-time configuration.\n */\nconst char *swscale_configuration(void);\n\n/**\n * Return the libswscale license.\n */\nconst char *swscale_license(void);\n\n/* values for the flags, the stuff on the command line is different */\n#define SWS_FAST_BILINEAR     1\n#define SWS_BILINEAR          2\n#define SWS_BICUBIC           4\n#define SWS_X                 8\n#define SWS_POINT          0x10\n#define SWS_AREA           0x20\n#define SWS_BICUBLIN       0x40\n#define SWS_GAUSS          0x80\n#define SWS_SINC          0x100\n#define SWS_LANCZOS       0x200\n#define SWS_SPLINE        0x400\n\n#define SWS_SRC_V_CHR_DROP_MASK     0x30000\n#define 
SWS_SRC_V_CHR_DROP_SHIFT    16\n\n#define SWS_PARAM_DEFAULT           123456\n\n#define SWS_PRINT_INFO              0x1000\n\n//the following 3 flags are not completely implemented\n//internal chrominance subsampling info\n#define SWS_FULL_CHR_H_INT    0x2000\n//input subsampling info\n#define SWS_FULL_CHR_H_INP    0x4000\n#define SWS_DIRECT_BGR        0x8000\n#define SWS_ACCURATE_RND      0x40000\n#define SWS_BITEXACT          0x80000\n#define SWS_ERROR_DIFFUSION  0x800000\n\n#define SWS_MAX_REDUCE_CUTOFF 0.002\n\n#define SWS_CS_ITU709         1\n#define SWS_CS_FCC            4\n#define SWS_CS_ITU601         5\n#define SWS_CS_ITU624         5\n#define SWS_CS_SMPTE170M      5\n#define SWS_CS_SMPTE240M      7\n#define SWS_CS_DEFAULT        5\n#define SWS_CS_BT2020         9\n\n/**\n * Return a pointer to yuv<->rgb coefficients for the given colorspace\n * suitable for sws_setColorspaceDetails().\n *\n * @param colorspace One of the SWS_CS_* macros. If invalid,\n * SWS_CS_DEFAULT is used.\n */\nconst int *sws_getCoefficients(int colorspace);\n\n// when used for filters they must have an odd number of elements\n// coeffs cannot be shared between vectors\ntypedef struct SwsVector {\n    double *coeff;              ///< pointer to the list of coefficients\n    int length;                 ///< number of coefficients in the vector\n} SwsVector;\n\n// vectors can be shared\ntypedef struct SwsFilter {\n    SwsVector *lumH;\n    SwsVector *lumV;\n    SwsVector *chrH;\n    SwsVector *chrV;\n} SwsFilter;\n\nstruct SwsContext;\n\n/**\n * Return a positive value if pix_fmt is a supported input format, 0\n * otherwise.\n */\nint sws_isSupportedInput(enum AVPixelFormat pix_fmt);\n\n/**\n * Return a positive value if pix_fmt is a supported output format, 0\n * otherwise.\n */\nint sws_isSupportedOutput(enum AVPixelFormat pix_fmt);\n\n/**\n * @param[in]  pix_fmt the pixel format\n * @return a positive value if an endianness conversion for pix_fmt is\n * supported, 0 otherwise.\n 
*/\nint sws_isSupportedEndiannessConversion(enum AVPixelFormat pix_fmt);\n\n/**\n * Allocate an empty SwsContext. This must be filled and passed to\n * sws_init_context(). For filling see AVOptions, options.c and\n * sws_setColorspaceDetails().\n */\nstruct SwsContext *sws_alloc_context(void);\n\n/**\n * Initialize the swscaler context sws_context.\n *\n * @return zero or positive value on success, a negative value on\n * error\n */\nav_warn_unused_result\nint sws_init_context(struct SwsContext *sws_context, SwsFilter *srcFilter, SwsFilter *dstFilter);\n\n/**\n * Free the swscaler context swsContext.\n * If swsContext is NULL, then does nothing.\n */\nvoid sws_freeContext(struct SwsContext *swsContext);\n\n/**\n * Allocate and return an SwsContext. You need it to perform\n * scaling/conversion operations using sws_scale().\n *\n * @param srcW the width of the source image\n * @param srcH the height of the source image\n * @param srcFormat the source image format\n * @param dstW the width of the destination image\n * @param dstH the height of the destination image\n * @param dstFormat the destination image format\n * @param flags specify which algorithm and options to use for rescaling\n * @param param extra parameters to tune the used scaler\n *              For SWS_BICUBIC param[0] and [1] tune the shape of the basis\n *              function, param[0] tunes f(1) and param[1] f´(1)\n *              For SWS_GAUSS param[0] tunes the exponent and thus cutoff\n *              frequency\n *              For SWS_LANCZOS param[0] tunes the width of the window function\n * @return a pointer to an allocated context, or NULL in case of error\n * @note this function is to be removed after a saner alternative is\n *       written\n */\nstruct SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,\n                                  int dstW, int dstH, enum AVPixelFormat dstFormat,\n                                  int flags, SwsFilter *srcFilter,\n       
                           SwsFilter *dstFilter, const double *param);\n\n/**\n * Scale the image slice in srcSlice and put the resulting scaled\n * slice in the image in dst. A slice is a sequence of consecutive\n * rows in an image.\n *\n * Slices have to be provided in sequential order, either in\n * top-bottom or bottom-top order. If slices are provided in\n * non-sequential order the behavior of the function is undefined.\n *\n * @param c         the scaling context previously created with\n *                  sws_getContext()\n * @param srcSlice  the array containing the pointers to the planes of\n *                  the source slice\n * @param srcStride the array containing the strides for each plane of\n *                  the source image\n * @param srcSliceY the position in the source image of the slice to\n *                  process, that is the number (counted starting from\n *                  zero) in the image of the first row of the slice\n * @param srcSliceH the height of the source slice, that is the number\n *                  of rows in the slice\n * @param dst       the array containing the pointers to the planes of\n *                  the destination image\n * @param dstStride the array containing the strides for each plane of\n *                  the destination image\n * @return          the height of the output slice\n */\nint sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[],\n              const int srcStride[], int srcSliceY, int srcSliceH,\n              uint8_t *const dst[], const int dstStride[]);\n\n/**\n * @param dstRange flag indicating the white-black range of the output (1=jpeg / 0=mpeg)\n * @param srcRange flag indicating the white-black range of the input (1=jpeg / 0=mpeg)\n * @param table the yuv2rgb coefficients describing the output yuv space, normally ff_yuv2rgb_coeffs[x]\n * @param inv_table the yuv2rgb coefficients describing the input yuv space, normally ff_yuv2rgb_coeffs[x]\n * @param brightness 16.16 
fixed point brightness correction\n * @param contrast 16.16 fixed point contrast correction\n * @param saturation 16.16 fixed point saturation correction\n * @return -1 if not supported\n */\nint sws_setColorspaceDetails(struct SwsContext *c, const int inv_table[4],\n                             int srcRange, const int table[4], int dstRange,\n                             int brightness, int contrast, int saturation);\n\n/**\n * @return -1 if not supported\n */\nint sws_getColorspaceDetails(struct SwsContext *c, int **inv_table,\n                             int *srcRange, int **table, int *dstRange,\n                             int *brightness, int *contrast, int *saturation);\n\n/**\n * Allocate and return an uninitialized vector with length coefficients.\n */\nSwsVector *sws_allocVec(int length);\n\n/**\n * Return a normalized Gaussian curve used to filter stuff\n * quality = 3 is high quality, lower is lower quality.\n */\nSwsVector *sws_getGaussianVec(double variance, double quality);\n\n/**\n * Scale all the coefficients of a by the scalar value.\n */\nvoid sws_scaleVec(SwsVector *a, double scalar);\n\n/**\n * Scale all the coefficients of a so that their sum equals height.\n */\nvoid sws_normalizeVec(SwsVector *a, double height);\n\n#if FF_API_SWS_VECTOR\nattribute_deprecated SwsVector *sws_getConstVec(double c, int length);\nattribute_deprecated SwsVector *sws_getIdentityVec(void);\nattribute_deprecated void sws_convVec(SwsVector *a, SwsVector *b);\nattribute_deprecated void sws_addVec(SwsVector *a, SwsVector *b);\nattribute_deprecated void sws_subVec(SwsVector *a, SwsVector *b);\nattribute_deprecated void sws_shiftVec(SwsVector *a, int shift);\nattribute_deprecated SwsVector *sws_cloneVec(SwsVector *a);\nattribute_deprecated void sws_printVec2(SwsVector *a, AVClass *log_ctx, int log_level);\n#endif\n\nvoid sws_freeVec(SwsVector *a);\n\nSwsFilter *sws_getDefaultFilter(float lumaGBlur, float chromaGBlur,\n                                float lumaSharpen, 
float chromaSharpen,\n                                float chromaHShift, float chromaVShift,\n                                int verbose);\nvoid sws_freeFilter(SwsFilter *filter);\n\n/**\n * Check if context can be reused, otherwise reallocate a new one.\n *\n * If context is NULL, just calls sws_getContext() to get a new\n * context. Otherwise, checks if the parameters are the ones already\n * saved in context. If that is the case, returns the current\n * context. Otherwise, frees context and gets a new context with\n * the new parameters.\n *\n * Be warned that srcFilter and dstFilter are not checked, they\n * are assumed to remain the same.\n */\nstruct SwsContext *sws_getCachedContext(struct SwsContext *context,\n                                        int srcW, int srcH, enum AVPixelFormat srcFormat,\n                                        int dstW, int dstH, enum AVPixelFormat dstFormat,\n                                        int flags, SwsFilter *srcFilter,\n                                        SwsFilter *dstFilter, const double *param);\n\n/**\n * Convert an 8-bit paletted frame into a frame with a color depth of 32 bits.\n *\n * The output frame will have the same packed format as the palette.\n *\n * @param src        source frame buffer\n * @param dst        destination frame buffer\n * @param num_pixels number of pixels to convert\n * @param palette    array with [256] entries, which must match color arrangement (RGB or BGR) of src\n */\nvoid sws_convertPalette8ToPacked32(const uint8_t *src, uint8_t *dst, int num_pixels, const uint8_t *palette);\n\n/**\n * Convert an 8-bit paletted frame into a frame with a color depth of 24 bits.\n *\n * With the palette format \"ABCD\", the destination frame ends up with the format \"ABC\".\n *\n * @param src        source frame buffer\n * @param dst        destination frame buffer\n * @param num_pixels number of pixels to convert\n * @param palette    array with [256] entries, which must match color 
arrangement (RGB or BGR) of src\n */\nvoid sws_convertPalette8ToPacked24(const uint8_t *src, uint8_t *dst, int num_pixels, const uint8_t *palette);\n\n/**\n * Get the AVClass for swsContext. It can be used in combination with\n * AV_OPT_SEARCH_FAKE_OBJ for examining options.\n *\n * @see av_opt_find().\n */\nconst AVClass *sws_get_class(void);\n\n/**\n * @}\n */\n\n#endif /* SWSCALE_SWSCALE_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/include/libswscale/version.h",
    "content": "/*\n * This file is part of FFmpeg.\n *\n * FFmpeg is free software; you can redistribute it and/or\n * modify it under the terms of the GNU Lesser General Public\n * License as published by the Free Software Foundation; either\n * version 2.1 of the License, or (at your option) any later version.\n *\n * FFmpeg is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n * Lesser General Public License for more details.\n *\n * You should have received a copy of the GNU Lesser General Public\n * License along with FFmpeg; if not, write to the Free Software\n * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n */\n\n#ifndef SWSCALE_VERSION_H\n#define SWSCALE_VERSION_H\n\n/**\n * @file\n * swscale version macros\n */\n\n#include \"libavutil/version.h\"\n\n#define LIBSWSCALE_VERSION_MAJOR   5\n#define LIBSWSCALE_VERSION_MINOR   1\n#define LIBSWSCALE_VERSION_MICRO 100\n\n#define LIBSWSCALE_VERSION_INT  AV_VERSION_INT(LIBSWSCALE_VERSION_MAJOR, \\\n                                               LIBSWSCALE_VERSION_MINOR, \\\n                                               LIBSWSCALE_VERSION_MICRO)\n#define LIBSWSCALE_VERSION      AV_VERSION(LIBSWSCALE_VERSION_MAJOR, \\\n                                           LIBSWSCALE_VERSION_MINOR, \\\n                                           LIBSWSCALE_VERSION_MICRO)\n#define LIBSWSCALE_BUILD        LIBSWSCALE_VERSION_INT\n\n#define LIBSWSCALE_IDENT        \"SwS\" AV_STRINGIFY(LIBSWSCALE_VERSION)\n\n/**\n * FF_API_* defines may be placed below to indicate public API that will be\n * dropped at a future version bump. 
The defines themselves are not part of\n * the public API and may change, break or disappear at any time.\n */\n\n#ifndef FF_API_SWS_VECTOR\n#define FF_API_SWS_VECTOR            (LIBSWSCALE_VERSION_MAJOR < 6)\n#endif\n\n#endif /* SWSCALE_VERSION_H */\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/avcodec-58.def",
    "content": "EXPORTS\n    av_ac3_parse_header\n    av_adts_header_parse\n    av_bitstream_filter_close\n    av_bitstream_filter_filter\n    av_bitstream_filter_init\n    av_bitstream_filter_next\n    av_bsf_alloc\n    av_bsf_free\n    av_bsf_get_by_name\n    av_bsf_get_class\n    av_bsf_get_null_filter\n    av_bsf_init\n    av_bsf_iterate\n    av_bsf_list_alloc\n    av_bsf_list_append\n    av_bsf_list_append2\n    av_bsf_list_finalize\n    av_bsf_list_free\n    av_bsf_list_parse_str\n    av_bsf_next\n    av_bsf_receive_packet\n    av_bsf_send_packet\n    av_codec_ffversion\n    av_codec_get_chroma_intra_matrix\n    av_codec_get_codec_descriptor\n    av_codec_get_codec_properties\n    av_codec_get_lowres\n    av_codec_get_max_lowres\n    av_codec_get_pkt_timebase\n    av_codec_get_seek_preroll\n    av_codec_is_decoder\n    av_codec_is_encoder\n    av_codec_iterate\n    av_codec_next\n    av_codec_set_chroma_intra_matrix\n    av_codec_set_codec_descriptor\n    av_codec_set_lowres\n    av_codec_set_pkt_timebase\n    av_codec_set_seek_preroll\n    av_copy_packet\n    av_copy_packet_side_data\n    av_cpb_properties_alloc\n    av_d3d11va_alloc_context\n    av_dct_calc\n    av_dct_end\n    av_dct_init\n    av_dirac_parse_sequence_header\n    av_dup_packet\n    av_dv_codec_profile\n    av_dv_codec_profile2\n    av_dv_frame_profile\n    av_fast_padded_malloc\n    av_fast_padded_mallocz\n    av_fft_calc\n    av_fft_end\n    av_fft_init\n    av_fft_permute\n    av_free_packet\n    av_get_audio_frame_duration\n    av_get_audio_frame_duration2\n    av_get_bits_per_sample\n    av_get_codec_tag_string\n    av_get_exact_bits_per_sample\n    av_get_pcm_codec\n    av_get_profile_name\n    av_grow_packet\n    av_hwaccel_next\n    av_imdct_calc\n    av_imdct_half\n    av_init_packet\n    av_jni_get_java_vm\n    av_jni_set_java_vm\n    av_lockmgr_register\n    av_mdct_calc\n    av_mdct_end\n    av_mdct_init\n    av_mediacodec_alloc_context\n    av_mediacodec_default_free\n    
av_mediacodec_default_init\n    av_mediacodec_release_buffer\n    av_new_packet\n    av_packet_add_side_data\n    av_packet_alloc\n    av_packet_clone\n    av_packet_copy_props\n    av_packet_free\n    av_packet_free_side_data\n    av_packet_from_data\n    av_packet_get_side_data\n    av_packet_make_refcounted\n    av_packet_make_writable\n    av_packet_merge_side_data\n    av_packet_move_ref\n    av_packet_new_side_data\n    av_packet_pack_dictionary\n    av_packet_ref\n    av_packet_rescale_ts\n    av_packet_shrink_side_data\n    av_packet_side_data_name\n    av_packet_split_side_data\n    av_packet_unpack_dictionary\n    av_packet_unref\n    av_parser_change\n    av_parser_close\n    av_parser_init\n    av_parser_iterate\n    av_parser_next\n    av_parser_parse2\n    av_picture_copy\n    av_picture_crop\n    av_picture_pad\n    av_qsv_alloc_context\n    av_rdft_calc\n    av_rdft_end\n    av_rdft_init\n    av_register_bitstream_filter\n    av_register_codec_parser\n    av_register_hwaccel\n    av_shrink_packet\n    av_vorbis_parse_frame\n    av_vorbis_parse_frame_flags\n    av_vorbis_parse_free\n    av_vorbis_parse_init\n    av_vorbis_parse_reset\n    av_xiphlacing\n    avcodec_align_dimensions\n    avcodec_align_dimensions2\n    avcodec_alloc_context3\n    avcodec_chroma_pos_to_enum\n    avcodec_close\n    avcodec_configuration\n    avcodec_copy_context\n    avcodec_dct_alloc\n    avcodec_dct_get_class\n    avcodec_dct_init\n    avcodec_decode_audio4\n    avcodec_decode_subtitle2\n    avcodec_decode_video2\n    avcodec_default_execute\n    avcodec_default_execute2\n    avcodec_default_get_buffer2\n    avcodec_default_get_format\n    avcodec_descriptor_get\n    avcodec_descriptor_get_by_name\n    avcodec_descriptor_next\n    avcodec_encode_audio2\n    avcodec_encode_subtitle\n    avcodec_encode_video2\n    avcodec_enum_to_chroma_pos\n    avcodec_fill_audio_frame\n    avcodec_find_best_pix_fmt2\n    avcodec_find_best_pix_fmt_of_2\n    
avcodec_find_best_pix_fmt_of_list\n    avcodec_find_decoder\n    avcodec_find_decoder_by_name\n    avcodec_find_encoder\n    avcodec_find_encoder_by_name\n    avcodec_flush_buffers\n    avcodec_free_context\n    avcodec_get_chroma_sub_sample\n    avcodec_get_class\n    avcodec_get_context_defaults3\n    avcodec_get_frame_class\n    avcodec_get_hw_config\n    avcodec_get_hw_frames_parameters\n    avcodec_get_name\n    avcodec_get_pix_fmt_loss\n    avcodec_get_subtitle_rect_class\n    avcodec_get_type\n    avcodec_is_open\n    avcodec_license\n    avcodec_open2\n    avcodec_parameters_alloc\n    avcodec_parameters_copy\n    avcodec_parameters_free\n    avcodec_parameters_from_context\n    avcodec_parameters_to_context\n    avcodec_pix_fmt_to_codec_tag\n    avcodec_profile_name\n    avcodec_receive_frame\n    avcodec_receive_packet\n    avcodec_register\n    avcodec_register_all\n    avcodec_send_frame\n    avcodec_send_packet\n    avcodec_string\n    avcodec_version\n    avpicture_alloc\n    avpicture_fill\n    avpicture_free\n    avpicture_get_size\n    avpicture_layout\n    avpriv_ac3_channel_layout_tab\n    avpriv_ac3_parse_header\n    avpriv_align_put_bits\n    avpriv_bprint_to_extradata\n    avpriv_codec2_mode_bit_rate\n    avpriv_codec2_mode_block_align\n    avpriv_codec2_mode_frame_size\n    avpriv_codec_get_cap_skip_frame_fill_param\n    avpriv_copy_bits\n    avpriv_dca_convert_bitstream\n    avpriv_dca_parse_core_frame_header\n    avpriv_dca_sample_rates\n    avpriv_dnxhd_get_frame_size\n    avpriv_dnxhd_get_interlaced\n    avpriv_do_elbg\n    avpriv_exif_decode_ifd\n    avpriv_find_pix_fmt\n    avpriv_find_start_code\n    avpriv_fits_header_init\n    avpriv_fits_header_parse_line\n    avpriv_get_raw_pix_fmt_tags\n    avpriv_h264_has_num_reorder_frames\n    avpriv_init_elbg\n    avpriv_mjpeg_bits_ac_chrominance\n    avpriv_mjpeg_bits_ac_luminance\n    avpriv_mjpeg_bits_dc_chrominance\n    avpriv_mjpeg_bits_dc_luminance\n    avpriv_mjpeg_val_ac_chrominance\n  
  avpriv_mjpeg_val_ac_luminance\n    avpriv_mjpeg_val_dc\n    avpriv_mpa_bitrate_tab\n    avpriv_mpa_freq_tab\n    avpriv_mpeg4audio_get_config\n    avpriv_mpeg4audio_sample_rates\n    avpriv_mpegaudio_decode_header\n    avpriv_pix_fmt_bps_avi\n    avpriv_pix_fmt_bps_mov\n    avpriv_put_string\n    avpriv_split_xiph_headers\n    avpriv_tak_parse_streaminfo\n    avpriv_toupper4\n    avsubtitle_free\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/avdevice-58.def",
    "content": "EXPORTS\n    av_device_capabilities\n    av_device_ffversion\n    av_input_audio_device_next\n    av_input_video_device_next\n    av_output_audio_device_next\n    av_output_video_device_next\n    avdevice_app_to_dev_control_message\n    avdevice_capabilities_create\n    avdevice_capabilities_free\n    avdevice_configuration\n    avdevice_dev_to_app_control_message\n    avdevice_free_list_devices\n    avdevice_license\n    avdevice_list_devices\n    avdevice_list_input_sources\n    avdevice_list_output_sinks\n    avdevice_register_all\n    avdevice_version\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/avfilter-7.def",
    "content": "EXPORTS\n    av_abuffersink_params_alloc\n    av_buffersink_get_channel_layout\n    av_buffersink_get_channels\n    av_buffersink_get_format\n    av_buffersink_get_frame\n    av_buffersink_get_frame_flags\n    av_buffersink_get_frame_rate\n    av_buffersink_get_h\n    av_buffersink_get_hw_frames_ctx\n    av_buffersink_get_sample_aspect_ratio\n    av_buffersink_get_sample_rate\n    av_buffersink_get_samples\n    av_buffersink_get_time_base\n    av_buffersink_get_type\n    av_buffersink_get_w\n    av_buffersink_params_alloc\n    av_buffersink_set_frame_size\n    av_buffersrc_add_frame\n    av_buffersrc_add_frame_flags\n    av_buffersrc_close\n    av_buffersrc_get_nb_failed_requests\n    av_buffersrc_parameters_alloc\n    av_buffersrc_parameters_set\n    av_buffersrc_write_frame\n    av_filter_ffversion\n    av_filter_iterate\n    avfilter_add_matrix\n    avfilter_all_channel_layouts\n    avfilter_config_links\n    avfilter_configuration\n    avfilter_free\n    avfilter_get_by_name\n    avfilter_get_class\n    avfilter_get_matrix\n    avfilter_graph_alloc\n    avfilter_graph_alloc_filter\n    avfilter_graph_config\n    avfilter_graph_create_filter\n    avfilter_graph_dump\n    avfilter_graph_free\n    avfilter_graph_get_filter\n    avfilter_graph_parse\n    avfilter_graph_parse2\n    avfilter_graph_parse_ptr\n    avfilter_graph_queue_command\n    avfilter_graph_request_oldest\n    avfilter_graph_send_command\n    avfilter_graph_set_auto_convert\n    avfilter_init_dict\n    avfilter_init_str\n    avfilter_inout_alloc\n    avfilter_inout_free\n    avfilter_insert_filter\n    avfilter_license\n    avfilter_link\n    avfilter_link_free\n    avfilter_link_get_channels\n    avfilter_link_set_closed\n    avfilter_make_format64_list\n    avfilter_mul_matrix\n    avfilter_next\n    avfilter_pad_count\n    avfilter_pad_get_name\n    avfilter_pad_get_type\n    avfilter_process_command\n    avfilter_register\n    avfilter_register_all\n    avfilter_sub_matrix\n    
avfilter_transform\n    avfilter_version\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/avformat-58.def",
    "content": "EXPORTS\n    av_add_index_entry\n    av_append_packet\n    av_apply_bitstream_filters\n    av_codec_get_id\n    av_codec_get_tag\n    av_codec_get_tag2\n    av_demuxer_iterate\n    av_demuxer_open\n    av_dump_format\n    av_filename_number_test\n    av_find_best_stream\n    av_find_default_stream_index\n    av_find_input_format\n    av_find_program_from_stream\n    av_fmt_ctx_get_duration_estimation_method\n    av_format_ffversion\n    av_format_get_audio_codec\n    av_format_get_control_message_cb\n    av_format_get_data_codec\n    av_format_get_metadata_header_padding\n    av_format_get_opaque\n    av_format_get_open_cb\n    av_format_get_probe_score\n    av_format_get_subtitle_codec\n    av_format_get_video_codec\n    av_format_inject_global_side_data\n    av_format_set_audio_codec\n    av_format_set_control_message_cb\n    av_format_set_data_codec\n    av_format_set_metadata_header_padding\n    av_format_set_opaque\n    av_format_set_open_cb\n    av_format_set_subtitle_codec\n    av_format_set_video_codec\n    av_get_frame_filename\n    av_get_frame_filename2\n    av_get_output_timestamp\n    av_get_packet\n    av_guess_codec\n    av_guess_format\n    av_guess_frame_rate\n    av_guess_sample_aspect_ratio\n    av_hex_dump\n    av_hex_dump_log\n    av_iformat_next\n    av_index_search_timestamp\n    av_interleaved_write_frame\n    av_interleaved_write_uncoded_frame\n    av_match_ext\n    av_muxer_iterate\n    av_new_program\n    av_oformat_next\n    av_pkt_dump2\n    av_pkt_dump_log2\n    av_probe_input_buffer\n    av_probe_input_buffer2\n    av_probe_input_format\n    av_probe_input_format2\n    av_probe_input_format3\n    av_program_add_stream_index\n    av_read_frame\n    av_read_pause\n    av_read_play\n    av_register_all\n    av_register_input_format\n    av_register_output_format\n    av_sdp_create\n    av_seek_frame\n    av_stream_add_side_data\n    av_stream_get_codec_timebase\n    av_stream_get_end_pts\n    av_stream_get_parser\n    
av_stream_get_r_frame_rate\n    av_stream_get_recommended_encoder_configuration\n    av_stream_get_side_data\n    av_stream_new_side_data\n    av_stream_set_r_frame_rate\n    av_stream_set_recommended_encoder_configuration\n    av_url_split\n    av_write_frame\n    av_write_trailer\n    av_write_uncoded_frame\n    av_write_uncoded_frame_query\n    avformat_alloc_context\n    avformat_alloc_output_context2\n    avformat_close_input\n    avformat_configuration\n    avformat_find_stream_info\n    avformat_flush\n    avformat_free_context\n    avformat_get_class\n    avformat_get_mov_audio_tags\n    avformat_get_mov_video_tags\n    avformat_get_riff_audio_tags\n    avformat_get_riff_video_tags\n    avformat_init_output\n    avformat_license\n    avformat_match_stream_specifier\n    avformat_network_deinit\n    avformat_network_init\n    avformat_new_stream\n    avformat_open_input\n    avformat_query_codec\n    avformat_queue_attached_pictures\n    avformat_seek_file\n    avformat_transfer_internal_stream_timing_info\n    avformat_version\n    avformat_write_header\n    avio_accept\n    avio_alloc_context\n    avio_check\n    avio_close\n    avio_close_dir\n    avio_close_dyn_buf\n    avio_closep\n    avio_context_free\n    avio_enum_protocols\n    avio_feof\n    avio_find_protocol_name\n    avio_flush\n    avio_free_directory_entry\n    avio_get_dyn_buf\n    avio_get_str\n    avio_get_str16be\n    avio_get_str16le\n    avio_handshake\n    avio_open\n    avio_open2\n    avio_open_dir\n    avio_open_dyn_buf\n    avio_pause\n    avio_printf\n    avio_put_str\n    avio_put_str16be\n    avio_put_str16le\n    avio_r8\n    avio_rb16\n    avio_rb24\n    avio_rb32\n    avio_rb64\n    avio_read\n    avio_read_dir\n    avio_read_partial\n    avio_read_to_bprint\n    avio_rl16\n    avio_rl24\n    avio_rl32\n    avio_rl64\n    avio_seek\n    avio_seek_time\n    avio_size\n    avio_skip\n    avio_w8\n    avio_wb16\n    avio_wb24\n    avio_wb32\n    avio_wb64\n    avio_wl16\n    
avio_wl24\n    avio_wl32\n    avio_wl64\n    avio_write\n    avio_write_marker\n    avpriv_dv_get_packet\n    avpriv_dv_init_demux\n    avpriv_dv_produce_packet\n    avpriv_io_delete\n    avpriv_io_move\n    avpriv_mpegts_parse_close\n    avpriv_mpegts_parse_open\n    avpriv_mpegts_parse_packet\n    avpriv_new_chapter\n    avpriv_register_devices\n    avpriv_set_pts_info\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/avutil-56.def",
    "content": "EXPORTS\n    av_add_i\n    av_add_q\n    av_add_stable\n    av_adler32_update\n    av_aes_alloc\n    av_aes_crypt\n    av_aes_ctr_alloc\n    av_aes_ctr_crypt\n    av_aes_ctr_free\n    av_aes_ctr_get_iv\n    av_aes_ctr_increment_iv\n    av_aes_ctr_init\n    av_aes_ctr_set_full_iv\n    av_aes_ctr_set_iv\n    av_aes_ctr_set_random_iv\n    av_aes_init\n    av_aes_size\n    av_append_path_component\n    av_asprintf\n    av_assert0_fpu\n    av_audio_fifo_alloc\n    av_audio_fifo_drain\n    av_audio_fifo_free\n    av_audio_fifo_peek\n    av_audio_fifo_peek_at\n    av_audio_fifo_read\n    av_audio_fifo_realloc\n    av_audio_fifo_reset\n    av_audio_fifo_size\n    av_audio_fifo_space\n    av_audio_fifo_write\n    av_base64_decode\n    av_base64_encode\n    av_basename\n    av_blowfish_alloc\n    av_blowfish_crypt\n    av_blowfish_crypt_ecb\n    av_blowfish_init\n    av_bmg_get\n    av_bprint_append_data\n    av_bprint_channel_layout\n    av_bprint_chars\n    av_bprint_clear\n    av_bprint_escape\n    av_bprint_finalize\n    av_bprint_get_buffer\n    av_bprint_init\n    av_bprint_init_for_buffer\n    av_bprint_strftime\n    av_bprintf\n    av_buffer_alloc\n    av_buffer_allocz\n    av_buffer_create\n    av_buffer_default_free\n    av_buffer_get_opaque\n    av_buffer_get_ref_count\n    av_buffer_is_writable\n    av_buffer_make_writable\n    av_buffer_pool_get\n    av_buffer_pool_init\n    av_buffer_pool_init2\n    av_buffer_pool_uninit\n    av_buffer_realloc\n    av_buffer_ref\n    av_buffer_unref\n    av_calloc\n    av_camellia_alloc\n    av_camellia_crypt\n    av_camellia_init\n    av_camellia_size\n    av_cast5_alloc\n    av_cast5_crypt\n    av_cast5_crypt2\n    av_cast5_init\n    av_cast5_size\n    av_channel_layout_extract_channel\n    av_chroma_location_from_name\n    av_chroma_location_name\n    av_cmp_i\n    av_color_primaries_from_name\n    av_color_primaries_name\n    av_color_range_from_name\n    av_color_range_name\n    av_color_space_from_name\n   
 av_color_space_name\n    av_color_transfer_from_name\n    av_color_transfer_name\n    av_compare_mod\n    av_compare_ts\n    av_content_light_metadata_alloc\n    av_content_light_metadata_create_side_data\n    av_cpu_count\n    av_cpu_max_align\n    av_crc\n    av_crc_get_table\n    av_crc_init\n    av_d2q\n    av_d2str\n    av_default_get_category\n    av_default_item_name\n    av_des_alloc\n    av_des_crypt\n    av_des_init\n    av_des_mac\n    av_dict_copy\n    av_dict_count\n    av_dict_free\n    av_dict_get\n    av_dict_get_string\n    av_dict_parse_string\n    av_dict_set\n    av_dict_set_int\n    av_dirname\n    av_display_matrix_flip\n    av_display_rotation_get\n    av_display_rotation_set\n    av_div_i\n    av_div_q\n    av_downmix_info_update_side_data\n    av_dynarray2_add\n    av_dynarray_add\n    av_dynarray_add_nofree\n    av_encryption_info_add_side_data\n    av_encryption_info_alloc\n    av_encryption_info_clone\n    av_encryption_info_free\n    av_encryption_info_get_side_data\n    av_encryption_init_info_add_side_data\n    av_encryption_init_info_alloc\n    av_encryption_init_info_free\n    av_encryption_init_info_get_side_data\n    av_escape\n    av_expr_eval\n    av_expr_free\n    av_expr_parse\n    av_expr_parse_and_eval\n    av_fast_malloc\n    av_fast_mallocz\n    av_fast_realloc\n    av_fifo_alloc\n    av_fifo_alloc_array\n    av_fifo_drain\n    av_fifo_free\n    av_fifo_freep\n    av_fifo_generic_peek\n    av_fifo_generic_peek_at\n    av_fifo_generic_read\n    av_fifo_generic_write\n    av_fifo_grow\n    av_fifo_realloc2\n    av_fifo_reset\n    av_fifo_size\n    av_fifo_space\n    av_file_map\n    av_file_unmap\n    av_find_best_pix_fmt_of_2\n    av_find_info_tag\n    av_find_nearest_q_idx\n    av_fopen_utf8\n    av_force_cpu_flags\n    av_fourcc_make_string\n    av_frame_alloc\n    av_frame_apply_cropping\n    av_frame_clone\n    av_frame_copy\n    av_frame_copy_props\n    av_frame_free\n    av_frame_get_best_effort_timestamp\n    
av_frame_get_buffer\n    av_frame_get_channel_layout\n    av_frame_get_channels\n    av_frame_get_color_range\n    av_frame_get_colorspace\n    av_frame_get_decode_error_flags\n    av_frame_get_metadata\n    av_frame_get_pkt_duration\n    av_frame_get_pkt_pos\n    av_frame_get_pkt_size\n    av_frame_get_plane_buffer\n    av_frame_get_qp_table\n    av_frame_get_sample_rate\n    av_frame_get_side_data\n    av_frame_is_writable\n    av_frame_make_writable\n    av_frame_move_ref\n    av_frame_new_side_data\n    av_frame_new_side_data_from_buf\n    av_frame_ref\n    av_frame_remove_side_data\n    av_frame_set_best_effort_timestamp\n    av_frame_set_channel_layout\n    av_frame_set_channels\n    av_frame_set_color_range\n    av_frame_set_colorspace\n    av_frame_set_decode_error_flags\n    av_frame_set_metadata\n    av_frame_set_pkt_duration\n    av_frame_set_pkt_pos\n    av_frame_set_pkt_size\n    av_frame_set_qp_table\n    av_frame_set_sample_rate\n    av_frame_side_data_name\n    av_frame_unref\n    av_free\n    av_freep\n    av_gcd\n    av_get_alt_sample_fmt\n    av_get_bits_per_pixel\n    av_get_bytes_per_sample\n    av_get_channel_description\n    av_get_channel_layout\n    av_get_channel_layout_channel_index\n    av_get_channel_layout_nb_channels\n    av_get_channel_layout_string\n    av_get_channel_name\n    av_get_colorspace_name\n    av_get_cpu_flags\n    av_get_default_channel_layout\n    av_get_extended_channel_layout\n    av_get_known_color_name\n    av_get_media_type_string\n    av_get_packed_sample_fmt\n    av_get_padded_bits_per_pixel\n    av_get_picture_type_char\n    av_get_pix_fmt\n    av_get_pix_fmt_loss\n    av_get_pix_fmt_name\n    av_get_pix_fmt_string\n    av_get_planar_sample_fmt\n    av_get_random_seed\n    av_get_sample_fmt\n    av_get_sample_fmt_name\n    av_get_sample_fmt_string\n    av_get_standard_channel_layout\n    av_get_time_base_q\n    av_get_token\n    av_gettime\n    av_gettime_relative\n    av_gettime_relative_is_monotonic\n    
av_hash_alloc\n    av_hash_final\n    av_hash_final_b64\n    av_hash_final_bin\n    av_hash_final_hex\n    av_hash_freep\n    av_hash_get_name\n    av_hash_get_size\n    av_hash_init\n    av_hash_names\n    av_hash_update\n    av_hmac_alloc\n    av_hmac_calc\n    av_hmac_final\n    av_hmac_free\n    av_hmac_init\n    av_hmac_update\n    av_hwdevice_ctx_alloc\n    av_hwdevice_ctx_create\n    av_hwdevice_ctx_create_derived\n    av_hwdevice_ctx_init\n    av_hwdevice_find_type_by_name\n    av_hwdevice_get_hwframe_constraints\n    av_hwdevice_get_type_name\n    av_hwdevice_hwconfig_alloc\n    av_hwdevice_iterate_types\n    av_hwframe_constraints_free\n    av_hwframe_ctx_alloc\n    av_hwframe_ctx_create_derived\n    av_hwframe_ctx_init\n    av_hwframe_get_buffer\n    av_hwframe_map\n    av_hwframe_transfer_data\n    av_hwframe_transfer_get_formats\n    av_i2int\n    av_image_alloc\n    av_image_check_sar\n    av_image_check_size\n    av_image_check_size2\n    av_image_copy\n    av_image_copy_plane\n    av_image_copy_to_buffer\n    av_image_copy_uc_from\n    av_image_fill_arrays\n    av_image_fill_black\n    av_image_fill_linesizes\n    av_image_fill_max_pixsteps\n    av_image_fill_pointers\n    av_image_get_buffer_size\n    av_image_get_linesize\n    av_int2i\n    av_int_list_length_for_size\n    av_lfg_init\n    av_lfg_init_from_data\n    av_log\n    av_log2\n    av_log2_16bit\n    av_log2_i\n    av_log_default_callback\n    av_log_format_line\n    av_log_format_line2\n    av_log_get_flags\n    av_log_get_level\n    av_log_set_callback\n    av_log_set_flags\n    av_log_set_level\n    av_lzo1x_decode\n    av_malloc\n    av_malloc_array\n    av_mallocz\n    av_mallocz_array\n    av_mastering_display_metadata_alloc\n    av_mastering_display_metadata_create_side_data\n    av_match_list\n    av_match_name\n    av_max_alloc\n    av_md5_alloc\n    av_md5_final\n    av_md5_init\n    av_md5_size\n    av_md5_sum\n    av_md5_update\n    av_memcpy_backptr\n    av_memdup\n    
av_mod_i\n    av_mul_i\n    av_mul_q\n    av_murmur3_alloc\n    av_murmur3_final\n    av_murmur3_init\n    av_murmur3_init_seeded\n    av_murmur3_update\n    av_nearer_q\n    av_opt_child_class_next\n    av_opt_child_next\n    av_opt_copy\n    av_opt_eval_double\n    av_opt_eval_flags\n    av_opt_eval_float\n    av_opt_eval_int\n    av_opt_eval_int64\n    av_opt_eval_q\n    av_opt_find\n    av_opt_find2\n    av_opt_flag_is_set\n    av_opt_free\n    av_opt_freep_ranges\n    av_opt_get\n    av_opt_get_channel_layout\n    av_opt_get_dict_val\n    av_opt_get_double\n    av_opt_get_image_size\n    av_opt_get_int\n    av_opt_get_key_value\n    av_opt_get_pixel_fmt\n    av_opt_get_q\n    av_opt_get_sample_fmt\n    av_opt_get_video_rate\n    av_opt_is_set_to_default\n    av_opt_is_set_to_default_by_name\n    av_opt_next\n    av_opt_ptr\n    av_opt_query_ranges\n    av_opt_query_ranges_default\n    av_opt_serialize\n    av_opt_set\n    av_opt_set_bin\n    av_opt_set_channel_layout\n    av_opt_set_defaults\n    av_opt_set_defaults2\n    av_opt_set_dict\n    av_opt_set_dict2\n    av_opt_set_dict_val\n    av_opt_set_double\n    av_opt_set_from_string\n    av_opt_set_image_size\n    av_opt_set_int\n    av_opt_set_pixel_fmt\n    av_opt_set_q\n    av_opt_set_sample_fmt\n    av_opt_set_video_rate\n    av_opt_show2\n    av_parse_color\n    av_parse_cpu_caps\n    av_parse_cpu_flags\n    av_parse_ratio\n    av_parse_time\n    av_parse_video_rate\n    av_parse_video_size\n    av_pix_fmt_count_planes\n    av_pix_fmt_desc_get\n    av_pix_fmt_desc_get_id\n    av_pix_fmt_desc_next\n    av_pix_fmt_get_chroma_sub_sample\n    av_pix_fmt_swap_endianness\n    av_pixelutils_get_sad_fn\n    av_q2intfloat\n    av_rc4_alloc\n    av_rc4_crypt\n    av_rc4_init\n    av_read_image_line\n    av_realloc\n    av_realloc_array\n    av_realloc_f\n    av_reallocp\n    av_reallocp_array\n    av_reduce\n    av_rescale\n    av_rescale_delta\n    av_rescale_q\n    av_rescale_q_rnd\n    av_rescale_rnd\n    
av_ripemd_alloc\n    av_ripemd_final\n    av_ripemd_init\n    av_ripemd_size\n    av_ripemd_update\n    av_sample_fmt_is_planar\n    av_samples_alloc\n    av_samples_alloc_array_and_samples\n    av_samples_copy\n    av_samples_fill_arrays\n    av_samples_get_buffer_size\n    av_samples_set_silence\n    av_set_cpu_flags_mask\n    av_set_options_string\n    av_sha512_alloc\n    av_sha512_final\n    av_sha512_init\n    av_sha512_size\n    av_sha512_update\n    av_sha_alloc\n    av_sha_final\n    av_sha_init\n    av_sha_size\n    av_sha_update\n    av_shr_i\n    av_small_strptime\n    av_spherical_alloc\n    av_spherical_from_name\n    av_spherical_projection_name\n    av_spherical_tile_bounds\n    av_stereo3d_alloc\n    av_stereo3d_create_side_data\n    av_stereo3d_from_name\n    av_stereo3d_type_name\n    av_strcasecmp\n    av_strdup\n    av_strerror\n    av_strireplace\n    av_stristart\n    av_stristr\n    av_strlcat\n    av_strlcatf\n    av_strlcpy\n    av_strncasecmp\n    av_strndup\n    av_strnstr\n    av_strstart\n    av_strtod\n    av_strtok\n    av_sub_i\n    av_sub_q\n    av_tea_alloc\n    av_tea_crypt\n    av_tea_init\n    av_tea_size\n    av_tempfile\n    av_thread_message_flush\n    av_thread_message_queue_alloc\n    av_thread_message_queue_free\n    av_thread_message_queue_recv\n    av_thread_message_queue_send\n    av_thread_message_queue_set_err_recv\n    av_thread_message_queue_set_err_send\n    av_thread_message_queue_set_free_func\n    av_timecode_adjust_ntsc_framenum2\n    av_timecode_check_frame_rate\n    av_timecode_get_smpte_from_framenum\n    av_timecode_init\n    av_timecode_init_from_string\n    av_timecode_make_mpeg_tc_string\n    av_timecode_make_smpte_tc_string\n    av_timecode_make_string\n    av_timegm\n    av_tree_destroy\n    av_tree_enumerate\n    av_tree_find\n    av_tree_insert\n    av_tree_node_alloc\n    av_tree_node_size\n    av_twofish_alloc\n    av_twofish_crypt\n    av_twofish_init\n    av_twofish_size\n    av_usleep\n    
av_utf8_decode\n    av_util_ffversion\n    av_vbprintf\n    av_version_info\n    av_vlog\n    av_write_image_line\n    av_xtea_alloc\n    av_xtea_crypt\n    av_xtea_init\n    av_xtea_le_crypt\n    av_xtea_le_init\n    avpriv_alloc_fixed_dsp\n    avpriv_cga_font\n    avpriv_dict_set_timestamp\n    avpriv_float_dsp_alloc\n    avpriv_get_gamma_from_trc\n    avpriv_get_trc_function_from_trc\n    avpriv_init_lls\n    avpriv_open\n    avpriv_report_missing_feature\n    avpriv_request_sample\n    avpriv_scalarproduct_float_c\n    avpriv_set_systematic_pal2\n    avpriv_slicethread_create\n    avpriv_slicethread_execute\n    avpriv_slicethread_free\n    avpriv_solve_lls\n    avpriv_tempfile\n    avpriv_vga16_font\n    avutil_configuration\n    avutil_license\n    avutil_version\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/postproc-55.def",
    "content": "EXPORTS\n    postproc_configuration\n    postproc_ffversion\n    postproc_license\n    postproc_version\n    pp_free_context\n    pp_free_mode\n    pp_get_context\n    pp_get_mode_by_name_and_quality\n    pp_help\n    pp_postprocess\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/swresample-3.def",
    "content": "EXPORTS\n    swr_alloc\n    swr_alloc_set_opts\n    swr_build_matrix\n    swr_close\n    swr_config_frame\n    swr_convert\n    swr_convert_frame\n    swr_drop_output\n    swr_ffversion\n    swr_free\n    swr_get_class\n    swr_get_delay\n    swr_get_out_samples\n    swr_init\n    swr_inject_silence\n    swr_is_initialized\n    swr_next_pts\n    swr_set_channel_mapping\n    swr_set_compensation\n    swr_set_matrix\n    swresample_configuration\n    swresample_license\n    swresample_version\n"
  },
  {
    "path": "ThirdParty/ffmpeg/lib/swscale-5.def",
    "content": "EXPORTS\n    sws_addVec\n    sws_allocVec\n    sws_alloc_context\n    sws_alloc_set_opts\n    sws_cloneVec\n    sws_convVec\n    sws_convertPalette8ToPacked24\n    sws_convertPalette8ToPacked32\n    sws_freeContext\n    sws_freeFilter\n    sws_freeVec\n    sws_getCachedContext\n    sws_getCoefficients\n    sws_getColorspaceDetails\n    sws_getConstVec\n    sws_getContext\n    sws_getDefaultFilter\n    sws_getGaussianVec\n    sws_getIdentityVec\n    sws_get_class\n    sws_init_context\n    sws_isSupportedEndiannessConversion\n    sws_isSupportedInput\n    sws_isSupportedOutput\n    sws_normalizeVec\n    sws_printVec2\n    sws_scale\n    sws_scaleVec\n    sws_setColorspaceDetails\n    sws_shiftVec\n    sws_subVec\n    swscale_configuration\n    swscale_license\n    swscale_version\n"
  },
  {
    "path": "ThirdParty/libopusenc/AUTHORS",
    "content": "Jean-Marc Valin <jmvalin@jmvalin.ca>\nTimothy B. Terriberry <tterribe@xiph.org>\nRalph Giles <giles@xiph.org>\n"
  },
  {
    "path": "ThirdParty/libopusenc/include/opusenc.h",
    "content": "/* Copyright (c) 2017 Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#if !defined(_opusenc_h)\n# define _opusenc_h (1)\n\n/**\\mainpage\n   \\section Introduction\n\n   This is the documentation for the <tt>libopusenc</tt> C API.\n\n   The <tt>libopusenc</tt> package provides a convenient high-level API for\n   encoding Ogg Opus files.\n\n   \\section Organization\n\n   The main API is divided into several sections:\n   - \\ref encoding\n   - \\ref comments\n   - \\ref encoder_ctl\n   - \\ref callbacks\n   - \\ref error_codes\n\n   \\section Overview\n\n   The <tt>libopusenc</tt> API provides an easy way to encode Ogg Opus files using\n   <tt>libopus</tt>.\n*/\n\n# if
defined(__cplusplus)\nextern \"C\" {\n# endif\n\n#include <opus.h>\n\n#ifndef OPE_EXPORT\n# if defined(WIN32)\n#  if defined(OPE_BUILD) && defined(DLL_EXPORT)\n#   define OPE_EXPORT __declspec(dllexport)\n#  else\n#   define OPE_EXPORT\n#  endif\n# elif defined(__GNUC__) && defined(OPE_BUILD)\n#  define OPE_EXPORT __attribute__ ((visibility (\"default\")))\n# else\n#  define OPE_EXPORT\n# endif\n#endif\n\n/**\\defgroup error_codes Error Codes*/\n/*@{*/\n/**\\name List of possible error codes\n   Many of the functions in this library return a negative error code when a\n    function fails.\n   This list provides a brief explanation of the common errors.\n   See each individual function for more details on what a specific error code\n    means in that context.*/\n/*@{*/\n\n\n/* Bump this when we change the API. */\n/** API version for this header. Can be used to check for features at compile time. */\n#define OPE_API_VERSION 0\n\n#define OPE_OK 0\n/* Based on the relevant libopus code minus 10. */\n#define OPE_BAD_ARG -11\n#define OPE_INTERNAL_ERROR -13\n#define OPE_UNIMPLEMENTED -15\n#define OPE_ALLOC_FAIL -17\n\n/* Specific to libopusenc. */\n#define OPE_CANNOT_OPEN -30\n#define OPE_TOO_LATE -31\n#define OPE_UNRECOVERABLE -32\n#define OPE_INVALID_PICTURE -33\n#define OPE_INVALID_ICON -34\n/*@}*/\n/*@}*/\n\n\n/* These are the \"raw\" request values -- they should usually not be used. 
*/\n#define OPE_SET_DECISION_DELAY_REQUEST      14000\n#define OPE_GET_DECISION_DELAY_REQUEST      14001\n#define OPE_SET_MUXING_DELAY_REQUEST        14002\n#define OPE_GET_MUXING_DELAY_REQUEST        14003\n#define OPE_SET_COMMENT_PADDING_REQUEST     14004\n#define OPE_GET_COMMENT_PADDING_REQUEST     14005\n#define OPE_SET_SERIALNO_REQUEST            14006\n#define OPE_GET_SERIALNO_REQUEST            14007\n#define OPE_SET_PACKET_CALLBACK_REQUEST     14008\n/*#define OPE_GET_PACKET_CALLBACK_REQUEST     14009*/\n#define OPE_SET_HEADER_GAIN_REQUEST         14010\n#define OPE_GET_HEADER_GAIN_REQUEST         14011\n\n/**\\defgroup encoder_ctl Encoding Options*/\n/*@{*/\n\n/**\\name Control parameters\n\n   Macros for setting encoder options.*/\n/*@{*/\n\n#define OPE_SET_DECISION_DELAY(x) OPE_SET_DECISION_DELAY_REQUEST, __opus_check_int(x)\n#define OPE_GET_DECISION_DELAY(x) OPE_GET_DECISION_DELAY_REQUEST, __opus_check_int_ptr(x)\n#define OPE_SET_MUXING_DELAY(x) OPE_SET_MUXING_DELAY_REQUEST, __opus_check_int(x)\n#define OPE_GET_MUXING_DELAY(x) OPE_GET_MUXING_DELAY_REQUEST, __opus_check_int_ptr(x)\n#define OPE_SET_COMMENT_PADDING(x) OPE_SET_COMMENT_PADDING_REQUEST, __opus_check_int(x)\n#define OPE_GET_COMMENT_PADDING(x) OPE_GET_COMMENT_PADDING_REQUEST, __opus_check_int_ptr(x)\n#define OPE_SET_SERIALNO(x) OPE_SET_SERIALNO_REQUEST, __opus_check_int(x)\n#define OPE_GET_SERIALNO(x) OPE_GET_SERIALNO_REQUEST, __opus_check_int_ptr(x)\n/* FIXME: Add type-checking macros to these. 
*/\n#define OPE_SET_PACKET_CALLBACK(x,u) OPE_SET_PACKET_CALLBACK_REQUEST, (x), (u)\n/*#define OPE_GET_PACKET_CALLBACK(x,u) OPE_GET_PACKET_CALLBACK_REQUEST, (x), (u)*/\n#define OPE_SET_HEADER_GAIN(x) OPE_SET_HEADER_GAIN_REQUEST, __opus_check_int(x)\n#define OPE_GET_HEADER_GAIN(x) OPE_GET_HEADER_GAIN_REQUEST, __opus_check_int_ptr(x)\n/*@}*/\n/*@}*/\n\n/**\\defgroup callbacks Callback Functions */\n/*@{*/\n\n/**\\name Callback functions\n\n   These are the callbacks that can be implemented for an encoder.*/\n/*@{*/\n\n/** Called for writing a page. */\ntypedef int (*ope_write_func)(void *user_data, const unsigned char *ptr, opus_int32 len);\n\n/** Called for closing a stream. */\ntypedef int (*ope_close_func)(void *user_data);\n\n/** Called on every packet encoded (including header). */\ntypedef int (*ope_packet_func)(void *user_data, const unsigned char *packet_ptr, opus_int32 packet_len, opus_uint32 flags);\n\n/** Callback functions for accessing the stream. */\ntypedef struct {\n  /** Callback for writing to the stream. */\n  ope_write_func write;\n  /** Callback for closing the stream. */\n  ope_close_func close;\n} OpusEncCallbacks;\n/*@}*/\n/*@}*/\n\n/** Opaque comments struct. */\ntypedef struct OggOpusComments OggOpusComments;\n\n/** Opaque encoder struct. */\ntypedef struct OggOpusEnc OggOpusEnc;\n\n/**\\defgroup comments Comments Handling */\n/*@{*/\n\n/**\\name Functions for handling comments\n\n   These functions make it possible to add comments and pictures to Ogg Opus files.*/\n/*@{*/\n\n/** Create a new comments object. \n    \\return Newly-created comments object. */\nOPE_EXPORT OggOpusComments *ope_comments_create(void);\n\n/** Create a deep copy of a comments object.\n    \\param comments Comments object to copy\n    \\return Deep copy of input. */\nOPE_EXPORT OggOpusComments *ope_comments_copy(OggOpusComments *comments);\n\n/** Destroys a comments object. 
\n    \\param comments Comments object to destroy*/\nOPE_EXPORT void ope_comments_destroy(OggOpusComments *comments);\n\n/** Add a comment. \n    \\param[in,out] comments Where to add the comments\n    \\param         tag      Tag for the comment (must not contain = char)\n    \\param         val      Value for the tag\n    \\return Error code\n */\nOPE_EXPORT int ope_comments_add(OggOpusComments *comments, const char *tag, const char *val);\n\n/** Add a comment as a single tag=value string. \n    \\param[in,out] comments    Where to add the comments\n    \\param         tag_and_val string of the form tag=value (must contain = char)\n    \\return Error code\n */\nOPE_EXPORT int ope_comments_add_string(OggOpusComments *comments, const char *tag_and_val);\n\n/** Add a picture. \n    \\param[in,out] comments     Where to add the comments\n    \\param         filename     File name for the picture\n    \\param         picture_type Type of picture (-1 for default)\n    \\param         description  Description (NULL means no comment)\n    \\return Error code\n */\nOPE_EXPORT int ope_comments_add_picture(OggOpusComments *comments, const char *filename, int picture_type, const char *description);\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup encoding Encoding */\n/*@{*/\n\n/**\\name Functions for encoding Ogg Opus files\n\n   These functions make it possible to encode Ogg Opus files.*/\n/*@{*/\n\n/** Create a new OggOpus file.\n    \\param path       Path where to create the file\n    \\param comments   Comments associated with the stream\n    \\param rate       Input sampling rate (48 kHz is faster)\n    \\param channels   Number of channels\n    \\param family     Mapping family (0 for mono/stereo, 1 for surround)\n    \\param[out] error Error code (NULL if no error is to be returned)\n    \\return Newly-created encoder.\n    */\nOPE_EXPORT OggOpusEnc *ope_encoder_create_file(const char *path, OggOpusComments *comments, opus_int32 rate, int channels, int family, int *error);\n\n/** 
Create a new OggOpus stream to be handled using callbacks\n    \\param callbacks  Callback functions\n    \\param user_data  Pointer to be associated with the stream and passed to the callbacks\n    \\param comments   Comments associated with the stream\n    \\param rate       Input sampling rate (48 kHz is faster)\n    \\param channels   Number of channels\n    \\param family     Mapping family (0 for mono/stereo, 1 for surround)\n    \\param[out] error Error code (NULL if no error is to be returned)\n    \\return Newly-created encoder.\n    */\nOPE_EXPORT OggOpusEnc *ope_encoder_create_callbacks(const OpusEncCallbacks *callbacks, void *user_data,\n    OggOpusComments *comments, opus_int32 rate, int channels, int family, int *error);\n\n/** Create a new OggOpus stream to be used along with ope_encoder_get_page().\n  This is mostly useful for muxing with other streams.\n    \\param comments   Comments associated with the stream\n    \\param rate       Input sampling rate (48 kHz is faster)\n    \\param channels   Number of channels\n    \\param family     Mapping family (0 for mono/stereo, 1 for surround)\n    \\param[out] error Error code (NULL if no error is to be returned)\n    \\return Newly-created encoder.\n    */\nOPE_EXPORT OggOpusEnc *ope_encoder_create_pull(OggOpusComments *comments, opus_int32 rate, int channels, int family, int *error);\n\n/** Add/encode any number of float samples to the stream.\n    \\param[in,out] enc         Encoder\n    \\param pcm                 Floating-point PCM values in the +/-1 range (interleaved if multiple channels)\n    \\param samples_per_channel Number of samples for each channel\n    \\return Error code*/\nOPE_EXPORT int ope_encoder_write_float(OggOpusEnc *enc, const float *pcm, int samples_per_channel);\n\n/** Add/encode any number of 16-bit linear samples to the stream.\n    \\param[in,out] enc         Encoder\n    \\param pcm                 Linear 16-bit PCM values in the [-32768,32767] range (interleaved if multiple channels)\n    \\param samples_per_channel Number of samples for each channel\n    \\return Error code*/\nOPE_EXPORT int ope_encoder_write(OggOpusEnc *enc, const opus_int16 *pcm, int samples_per_channel);\n\n/** Get the next page from the stream (only if using ope_encoder_create_pull()).\n    \\param[in,out] enc Encoder\n    \\param[out] page   Next available encoded page\n    \\param[out] len    Size (in bytes) of the page returned\n    \\param flush       If non-zero, forces a flush of the page (if any data available)\n    \\return 1 if there is a page available, 0 if not. */\nOPE_EXPORT int ope_encoder_get_page(OggOpusEnc *enc, unsigned char **page, opus_int32 *len, int flush);\n\n/** Finalizes the stream, but does not deallocate the object.\n    \\param[in,out] enc Encoder\n    \\return Error code\n */\nOPE_EXPORT int ope_encoder_drain(OggOpusEnc *enc);\n\n/** Deallocates the object. Make sure to call ope_encoder_drain() first.\n    \\param[in,out] enc Encoder\n */\nOPE_EXPORT void ope_encoder_destroy(OggOpusEnc *enc);\n\n/** Ends the stream and creates a new stream within the same file.\n    \\param[in,out] enc Encoder\n    \\param comments   Comments associated with the stream\n    \\return Error code\n */\nOPE_EXPORT int ope_encoder_chain_current(OggOpusEnc *enc, OggOpusComments *comments);\n\n/** Ends the stream and creates a new file.\n    \\param[in,out] enc Encoder\n    \\param path        Path where to write the new file\n    \\param comments    Comments associated with the stream\n    \\return Error code\n */\nOPE_EXPORT int ope_encoder_continue_new_file(OggOpusEnc *enc, const char *path, OggOpusComments *comments);\n\n/** Ends the stream and creates a new file (callback-based).\n    \\param[in,out] enc Encoder\n    \\param user_data   Pointer to be associated with the new stream and passed to the callbacks\n    \\param comments    Comments associated with the stream\n    \\return Error code\n */\nOPE_EXPORT int ope_encoder_continue_new_callbacks(OggOpusEnc 
*enc, void *user_data, OggOpusComments *comments);\n\n/** Write out the header now rather than wait for audio to begin.\n    \\param[in,out] enc Encoder\n    \\return Error code\n */\nOPE_EXPORT int ope_encoder_flush_header(OggOpusEnc *enc);\n\n/** Sets encoder options.\n    \\param[in,out] enc Encoder\n    \\param request     Use a request macro\n    \\return Error code\n */\nOPE_EXPORT int ope_encoder_ctl(OggOpusEnc *enc, int request, ...);\n\n/** Converts a libopusenc error code into a human readable string.\n  *\n  * @param error Error number\n  * @returns Error string\n  */\nOPE_EXPORT const char *ope_strerror(int error);\n\n/** Returns a string representing the version of libopusenc being used at run time.\n    \\return A string describing the version of this library */\nOPE_EXPORT const char *ope_get_version_string(void);\n\n/** ABI version for this header. Can be used to check for features at run time.\n    \\return An integer representing the ABI version */\nOPE_EXPORT int ope_get_abi_version(void);\n\n/*@}*/\n/*@}*/\n\n# if defined(__cplusplus)\n}\n# endif\n\n#endif\n"
  },
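The functions declared in opusenc.h form a simple create/write/drain/destroy workflow. Below is a minimal sketch of that workflow, not part of the vendored sources: it must be linked against libopusenc and libopus to build, and the output path, tag value, and test-tone parameters are made up for illustration.

```c
#include <math.h>
#include <opusenc.h>

int main(void) {
  OggOpusComments *comments;
  OggOpusEnc *enc;
  int error;
  float pcm[960 * 2]; /* 20 ms of interleaved stereo at 48 kHz */
  int i;

  comments = ope_comments_create();
  ope_comments_add(comments, "ARTIST", "Example"); /* hypothetical tag value */
  enc = ope_encoder_create_file("out.opus", comments, 48000, 2, 0, &error);
  if (!enc) return 1;

  /* Feed one short 440 Hz test tone; real code would loop over input buffers. */
  for (i = 0; i < 960; i++)
    pcm[2*i] = pcm[2*i+1] = 0.1f * (float)sin(2.0 * 3.14159265 * 440.0 * i / 48000.0);
  ope_encoder_write_float(enc, pcm, 960);

  ope_encoder_drain(enc);   /* finalize the stream before... */
  ope_encoder_destroy(enc); /* ...freeing the encoder */
  ope_comments_destroy(comments);
  return 0;
}
```

For callback- or pull-based output, the same write/drain/destroy sequence applies after `ope_encoder_create_callbacks()` or `ope_encoder_create_pull()`.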
  {
    "path": "ThirdParty/libopusenc/src/arch.h",
    "content": "/* Copyright (C) 2003 Jean-Marc Valin */\n/**\n   @file arch.h\n   @brief Various architecture definitions for Speex\n*/\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   - Neither the name of the Xiph.org Foundation nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef ARCH_H\n#define ARCH_H\n\n/* A couple of tests to catch stupid option combinations */\n#ifdef FIXED_POINT\n\n#ifdef FLOATING_POINT\n#error You cannot compile as floating point and fixed point at the same time\n#endif\n#ifdef _USE_SSE\n#error SSE is only for floating-point\n#endif\n#if ((defined (ARM4_ASM)||defined (ARM5E_ASM)) && defined(BFIN_ASM)) || (defined (ARM4_ASM)&&defined(ARM5E_ASM))\n#error Make up your mind. What CPU do you have?\n#endif\n#ifdef VORBIS_PSYCHO\n#error Vorbis-psy model currently not implemented in fixed-point\n#endif\n\n#else\n\n#ifndef FLOATING_POINT\n#error You now need to define either FIXED_POINT or FLOATING_POINT\n#endif\n#if defined (ARM4_ASM) || defined(ARM5E_ASM) || defined(BFIN_ASM)\n#error I suppose you can have a [ARM4/ARM5E/Blackfin] that has float instructions?\n#endif\n#ifdef FIXED_POINT_DEBUG\n#error \"Don't you think enabling fixed-point is a good thing to do if you want to debug that?\"\n#endif\n\n\n#endif\n\n\n#define ABS(x) ((x) < 0 ? (-(x)) : (x))      /**< Absolute integer value. */\n#define ABS16(x) ((x) < 0 ? (-(x)) : (x))    /**< Absolute 16-bit value.  */\n#define MIN16(a,b) ((a) < (b) ? (a) : (b))   /**< Minimum 16-bit value.   */\n#define MAX16(a,b) ((a) > (b) ? (a) : (b))   /**< Maximum 16-bit value.   */\n#define ABS32(x) ((x) < 0 ? (-(x)) : (x))    /**< Absolute 32-bit value.  */\n#define MIN32(a,b) ((a) < (b) ? (a) : (b))   /**< Minimum 32-bit value.   
*/\n#define MAX32(a,b) ((a) > (b) ? (a) : (b))   /**< Maximum 32-bit value.   */\n\n#ifdef FIXED_POINT\n\ntypedef spx_int16_t spx_word16_t;\ntypedef spx_int32_t spx_word32_t;\ntypedef spx_word32_t spx_mem_t;\ntypedef spx_word16_t spx_coef_t;\ntypedef spx_word16_t spx_lsp_t;\ntypedef spx_word32_t spx_sig_t;\n\n#define Q15ONE 32767\n\n#define LPC_SCALING  8192\n#define SIG_SCALING  16384\n#define LSP_SCALING  8192.\n#define GAMMA_SCALING 32768.\n#define GAIN_SCALING 64\n#define GAIN_SCALING_1 0.015625\n\n#define LPC_SHIFT    13\n#define LSP_SHIFT    13\n#define SIG_SHIFT    14\n#define GAIN_SHIFT   6\n\n#define WORD2INT(x) ((x) < -32767 ? -32768 : ((x) > 32766 ? 32767 : (x)))\n\n#define VERY_SMALL 0\n#define VERY_LARGE32 ((spx_word32_t)2147483647)\n#define VERY_LARGE16 ((spx_word16_t)32767)\n#define Q15_ONE ((spx_word16_t)32767)\n\n\n#ifdef FIXED_DEBUG\n#include \"fixed_debug.h\"\n#else\n\n#include \"fixed_generic.h\"\n\n#ifdef ARM5E_ASM\n#include \"fixed_arm5e.h\"\n#elif defined (ARM4_ASM)\n#include \"fixed_arm4.h\"\n#elif defined (BFIN_ASM)\n#include \"fixed_bfin.h\"\n#endif\n\n#endif\n\n\n#else\n\ntypedef float spx_mem_t;\ntypedef float spx_coef_t;\ntypedef float spx_lsp_t;\ntypedef float spx_sig_t;\ntypedef float spx_word16_t;\ntypedef float spx_word32_t;\n\n#define Q15ONE 1.0f\n#define LPC_SCALING  1.f\n#define SIG_SCALING  1.f\n#define LSP_SCALING  1.f\n#define GAMMA_SCALING 1.f\n#define GAIN_SCALING 1.f\n#define GAIN_SCALING_1 1.f\n\n\n#define VERY_SMALL 1e-15f\n#define VERY_LARGE32 1e15f\n#define VERY_LARGE16 1e15f\n#define Q15_ONE ((spx_word16_t)1.f)\n\n#define QCONST16(x,bits) (x)\n#define QCONST32(x,bits) (x)\n\n#define NEG16(x) (-(x))\n#define NEG32(x) (-(x))\n#define EXTRACT16(x) (x)\n#define EXTEND32(x) (x)\n#define SHR16(a,shift) (a)\n#define SHL16(a,shift) (a)\n#define SHR32(a,shift) (a)\n#define SHL32(a,shift) (a)\n#define PSHR16(a,shift) (a)\n#define PSHR32(a,shift) (a)\n#define VSHR32(a,shift) (a)\n#define SATURATE16(x,a) (x)\n#define 
SATURATE32(x,a) (x)\n#define SATURATE32PSHR(x,shift,a) (x)\n\n#define PSHR(a,shift)       (a)\n#define SHR(a,shift)       (a)\n#define SHL(a,shift)       (a)\n#define SATURATE(x,a) (x)\n\n#define ADD16(a,b) ((a)+(b))\n#define SUB16(a,b) ((a)-(b))\n#define ADD32(a,b) ((a)+(b))\n#define SUB32(a,b) ((a)-(b))\n#define MULT16_16_16(a,b)     ((a)*(b))\n#define MULT16_16(a,b)     ((spx_word32_t)(a)*(spx_word32_t)(b))\n#define MAC16_16(c,a,b)     ((c)+(spx_word32_t)(a)*(spx_word32_t)(b))\n\n#define MULT16_32_Q11(a,b)     ((a)*(b))\n#define MULT16_32_Q13(a,b)     ((a)*(b))\n#define MULT16_32_Q14(a,b)     ((a)*(b))\n#define MULT16_32_Q15(a,b)     ((a)*(b))\n#define MULT16_32_P15(a,b)     ((a)*(b))\n\n#define MAC16_32_Q11(c,a,b)     ((c)+(a)*(b))\n#define MAC16_32_Q15(c,a,b)     ((c)+(a)*(b))\n\n#define MAC16_16_Q11(c,a,b)     ((c)+(a)*(b))\n#define MAC16_16_Q13(c,a,b)     ((c)+(a)*(b))\n#define MAC16_16_P13(c,a,b)     ((c)+(a)*(b))\n#define MULT16_16_Q11_32(a,b)     ((a)*(b))\n#define MULT16_16_Q13(a,b)     ((a)*(b))\n#define MULT16_16_Q14(a,b)     ((a)*(b))\n#define MULT16_16_Q15(a,b)     ((a)*(b))\n#define MULT16_16_P15(a,b)     ((a)*(b))\n#define MULT16_16_P13(a,b)     ((a)*(b))\n#define MULT16_16_P14(a,b)     ((a)*(b))\n\n#define DIV32_16(a,b)     (((spx_word32_t)(a))/(spx_word16_t)(b))\n#define PDIV32_16(a,b)     (((spx_word32_t)(a))/(spx_word16_t)(b))\n#define DIV32(a,b)     (((spx_word32_t)(a))/(spx_word32_t)(b))\n#define PDIV32(a,b)     (((spx_word32_t)(a))/(spx_word32_t)(b))\n\n#define WORD2INT(x) ((x) < -32767.5f ? -32768 : ((x) > 32766.5f ? 32767 : floor(.5+(x))))\n\n#endif\n\n\n#if defined (CONFIG_TI_C54X) || defined (CONFIG_TI_C55X)\n\n/* 2 on TI C5x DSP */\n#define BYTES_PER_CHAR 2\n#define BITS_PER_CHAR 16\n#define LOG2_BITS_PER_CHAR 4\n\n#else\n\n#define BYTES_PER_CHAR 1\n#define BITS_PER_CHAR 8\n#define LOG2_BITS_PER_CHAR 3\n\n#endif\n\n\n\n#ifdef FIXED_DEBUG\nextern long long spx_mips;\n#endif\n\n\n#endif\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/ogg_packer.c",
    "content": "/* Copyright (c) 2017 Jean-Marc Valin\n   Copyright (c) 1994-2010 Xiph.Org Foundation */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#include <stdlib.h>\n#include <string.h>\n#include <assert.h>\n\n#include <stdio.h>\n#include \"ogg_packer.h\"\n\n#define MAX_HEADER_SIZE (27+255)\n\n#define MAX_PAGE_SIZE (255*255 + MAX_HEADER_SIZE)\n\nstatic const oggp_uint32 crc_lookup[256]={\n  0x00000000,0x04c11db7,0x09823b6e,0x0d4326d9,\n  0x130476dc,0x17c56b6b,0x1a864db2,0x1e475005,\n  0x2608edb8,0x22c9f00f,0x2f8ad6d6,0x2b4bcb61,\n  0x350c9b64,0x31cd86d3,0x3c8ea00a,0x384fbdbd,\n  0x4c11db70,0x48d0c6c7,0x4593e01e,0x4152fda9,\n  0x5f15adac,0x5bd4b01b,0x569796c2,0x52568b75,\n  
0x6a1936c8,0x6ed82b7f,0x639b0da6,0x675a1011,\n  0x791d4014,0x7ddc5da3,0x709f7b7a,0x745e66cd,\n  0x9823b6e0,0x9ce2ab57,0x91a18d8e,0x95609039,\n  0x8b27c03c,0x8fe6dd8b,0x82a5fb52,0x8664e6e5,\n  0xbe2b5b58,0xbaea46ef,0xb7a96036,0xb3687d81,\n  0xad2f2d84,0xa9ee3033,0xa4ad16ea,0xa06c0b5d,\n  0xd4326d90,0xd0f37027,0xddb056fe,0xd9714b49,\n  0xc7361b4c,0xc3f706fb,0xceb42022,0xca753d95,\n  0xf23a8028,0xf6fb9d9f,0xfbb8bb46,0xff79a6f1,\n  0xe13ef6f4,0xe5ffeb43,0xe8bccd9a,0xec7dd02d,\n  0x34867077,0x30476dc0,0x3d044b19,0x39c556ae,\n  0x278206ab,0x23431b1c,0x2e003dc5,0x2ac12072,\n  0x128e9dcf,0x164f8078,0x1b0ca6a1,0x1fcdbb16,\n  0x018aeb13,0x054bf6a4,0x0808d07d,0x0cc9cdca,\n  0x7897ab07,0x7c56b6b0,0x71159069,0x75d48dde,\n  0x6b93dddb,0x6f52c06c,0x6211e6b5,0x66d0fb02,\n  0x5e9f46bf,0x5a5e5b08,0x571d7dd1,0x53dc6066,\n  0x4d9b3063,0x495a2dd4,0x44190b0d,0x40d816ba,\n  0xaca5c697,0xa864db20,0xa527fdf9,0xa1e6e04e,\n  0xbfa1b04b,0xbb60adfc,0xb6238b25,0xb2e29692,\n  0x8aad2b2f,0x8e6c3698,0x832f1041,0x87ee0df6,\n  0x99a95df3,0x9d684044,0x902b669d,0x94ea7b2a,\n  0xe0b41de7,0xe4750050,0xe9362689,0xedf73b3e,\n  0xf3b06b3b,0xf771768c,0xfa325055,0xfef34de2,\n  0xc6bcf05f,0xc27dede8,0xcf3ecb31,0xcbffd686,\n  0xd5b88683,0xd1799b34,0xdc3abded,0xd8fba05a,\n  0x690ce0ee,0x6dcdfd59,0x608edb80,0x644fc637,\n  0x7a089632,0x7ec98b85,0x738aad5c,0x774bb0eb,\n  0x4f040d56,0x4bc510e1,0x46863638,0x42472b8f,\n  0x5c007b8a,0x58c1663d,0x558240e4,0x51435d53,\n  0x251d3b9e,0x21dc2629,0x2c9f00f0,0x285e1d47,\n  0x36194d42,0x32d850f5,0x3f9b762c,0x3b5a6b9b,\n  0x0315d626,0x07d4cb91,0x0a97ed48,0x0e56f0ff,\n  0x1011a0fa,0x14d0bd4d,0x19939b94,0x1d528623,\n  0xf12f560e,0xf5ee4bb9,0xf8ad6d60,0xfc6c70d7,\n  0xe22b20d2,0xe6ea3d65,0xeba91bbc,0xef68060b,\n  0xd727bbb6,0xd3e6a601,0xdea580d8,0xda649d6f,\n  0xc423cd6a,0xc0e2d0dd,0xcda1f604,0xc960ebb3,\n  0xbd3e8d7e,0xb9ff90c9,0xb4bcb610,0xb07daba7,\n  0xae3afba2,0xaafbe615,0xa7b8c0cc,0xa379dd7b,\n  0x9b3660c6,0x9ff77d71,0x92b45ba8,0x9675461f,\n  
0x8832161a,0x8cf30bad,0x81b02d74,0x857130c3,\n  0x5d8a9099,0x594b8d2e,0x5408abf7,0x50c9b640,\n  0x4e8ee645,0x4a4ffbf2,0x470cdd2b,0x43cdc09c,\n  0x7b827d21,0x7f436096,0x7200464f,0x76c15bf8,\n  0x68860bfd,0x6c47164a,0x61043093,0x65c52d24,\n  0x119b4be9,0x155a565e,0x18197087,0x1cd86d30,\n  0x029f3d35,0x065e2082,0x0b1d065b,0x0fdc1bec,\n  0x3793a651,0x3352bbe6,0x3e119d3f,0x3ad08088,\n  0x2497d08d,0x2056cd3a,0x2d15ebe3,0x29d4f654,\n  0xc5a92679,0xc1683bce,0xcc2b1d17,0xc8ea00a0,\n  0xd6ad50a5,0xd26c4d12,0xdf2f6bcb,0xdbee767c,\n  0xe3a1cbc1,0xe760d676,0xea23f0af,0xeee2ed18,\n  0xf0a5bd1d,0xf464a0aa,0xf9278673,0xfde69bc4,\n  0x89b8fd09,0x8d79e0be,0x803ac667,0x84fbdbd0,\n  0x9abc8bd5,0x9e7d9662,0x933eb0bb,0x97ffad0c,\n  0xafb010b1,0xab710d06,0xa6322bdf,0xa2f33668,\n  0xbcb4666d,0xb8757bda,0xb5365d03,0xb1f740b4};\n\nstatic void ogg_page_checksum_set(unsigned char *page, oggp_int32 len){\n  oggp_uint32 crc_reg=0;\n  oggp_int32 i;\n\n  /* safety; needed for API behavior, but not framing code */\n  page[22]=0;\n  page[23]=0;\n  page[24]=0;\n  page[25]=0;\n\n  for(i=0;i<len;i++) crc_reg=(crc_reg<<8)^crc_lookup[((crc_reg >> 24)&0xff)^page[i]];\n\n  page[22]=(unsigned char)(crc_reg&0xff);\n  page[23]=(unsigned char)((crc_reg>>8)&0xff);\n  page[24]=(unsigned char)((crc_reg>>16)&0xff);\n  page[25]=(unsigned char)((crc_reg>>24)&0xff);\n}\n\ntypedef struct {\n  oggp_uint64 granulepos;\n  size_t buf_pos;\n  size_t buf_size;\n  size_t lacing_pos;\n  size_t lacing_size;\n  int flags;\n  size_t pageno;\n} oggp_page;\n\nstruct oggpacker {\n  oggp_int32 serialno;\n  unsigned char *buf;\n  unsigned char *alloc_buf;\n  unsigned char *user_buf;\n  size_t buf_size;\n  size_t buf_fill;\n  size_t buf_begin;\n  unsigned char *lacing;\n  size_t lacing_size;\n  size_t lacing_fill;\n  size_t lacing_begin;\n  oggp_page *pages;\n  size_t pages_size;\n  size_t pages_fill;\n  oggp_uint64 muxing_delay;\n  int is_eos;\n  oggp_uint64 curr_granule;\n  oggp_uint64 last_granule;\n  size_t pageno;\n};\n\n/** 
Allocates an oggpacker object */\noggpacker *oggp_create(oggp_int32 serialno) {\n  oggpacker *oggp;\n  oggp = malloc(sizeof(*oggp));\n  if (oggp == NULL) goto fail;\n  oggp->alloc_buf = NULL;\n  oggp->lacing = NULL;\n  oggp->pages = NULL;\n  oggp->user_buf = NULL;\n\n  oggp->buf_size = MAX_PAGE_SIZE;\n  oggp->lacing_size = 256;\n  oggp->pages_size = 10;\n\n  oggp->alloc_buf = malloc(oggp->buf_size + MAX_HEADER_SIZE);\n  oggp->lacing = malloc(oggp->lacing_size);\n  oggp->pages = malloc(oggp->pages_size * sizeof(oggp->pages[0]));\n  if (!oggp->alloc_buf || !oggp->lacing || !oggp->pages) goto fail;\n  oggp->buf = oggp->alloc_buf + MAX_HEADER_SIZE;\n\n  oggp->serialno = serialno;\n  oggp->buf_fill = 0;\n  oggp->buf_begin = 0;\n  oggp->lacing_fill = 0;\n  oggp->lacing_begin = 0;\n  oggp->pages_fill = 0;\n\n  oggp->is_eos = 0;\n  oggp->curr_granule = 0;\n  oggp->last_granule = 0;\n  oggp->pageno = 0;\n  oggp->muxing_delay = 0;\n  return oggp;\nfail:\n  if (oggp) {\n    if (oggp->lacing) free(oggp->lacing);\n    if (oggp->alloc_buf) free(oggp->alloc_buf);\n    if (oggp->pages) free(oggp->pages);\n    free(oggp);\n  }\n  return NULL;\n}\n\n/** Frees memory associated with an oggpacker object */\nvoid oggp_destroy(oggpacker *oggp) {\n  free(oggp->lacing);\n  free(oggp->alloc_buf);\n  free(oggp->pages);\n  free(oggp);\n}\n\n/** Sets the maximum muxing delay in granulepos units. Pages will be auto-flushed\n    to enforce the delay and to avoid continued pages if possible. */\nvoid oggp_set_muxing_delay(oggpacker *oggp, oggp_uint64 delay) {\n  oggp->muxing_delay = delay;\n}\n\nstatic void shift_buffer(oggpacker *oggp) {\n  size_t buf_shift;\n  size_t lacing_shift;\n  size_t i;\n  buf_shift = oggp->pages_fill ? oggp->pages[0].buf_pos : oggp->buf_begin;\n  lacing_shift = oggp->pages_fill ? 
oggp->pages[0].lacing_pos : oggp->lacing_begin;\n  if (4*lacing_shift > oggp->lacing_fill) {\n    memmove(&oggp->lacing[0], &oggp->lacing[lacing_shift], oggp->lacing_fill-lacing_shift);\n    for (i=0;i<oggp->pages_fill;i++) oggp->pages[i].lacing_pos -= lacing_shift;\n    oggp->lacing_fill -= lacing_shift;\n    oggp->lacing_begin -= lacing_shift;\n  }\n  if (4*buf_shift > oggp->buf_fill) {\n    memmove(&oggp->buf[0], &oggp->buf[buf_shift], oggp->buf_fill-buf_shift);\n    for (i=0;i<oggp->pages_fill;i++) oggp->pages[i].buf_pos -= buf_shift;\n    oggp->buf_fill -= buf_shift;\n    oggp->buf_begin -= buf_shift;\n  }\n}\n\n/** Get a buffer in which to write the next packet. The buffer will have\n    size \"bytes\", but fewer bytes can be written. The buffer remains valid through\n    a call to oggp_flush_page() or oggp_get_next_page(), but is invalidated by\n    another call to oggp_get_packet_buffer() or by a call to oggp_commit_packet(). */\nunsigned char *oggp_get_packet_buffer(oggpacker *oggp, oggp_int32 bytes) {\n  if (oggp->buf_fill + bytes > oggp->buf_size) {\n    shift_buffer(oggp);\n\n    /* If we didn't shift the buffer or if we did and there's still not enough room, make some more. */\n    if (oggp->buf_fill + bytes > oggp->buf_size) {\n      size_t newsize;\n      unsigned char *newbuf;\n      newsize = oggp->buf_fill + bytes + MAX_HEADER_SIZE;\n      /* Making sure we don't need to do that too often. 
*/\n      newsize = newsize*3/2;\n      newbuf = realloc(oggp->alloc_buf, newsize);\n      if (newbuf != NULL) {\n        oggp->alloc_buf = newbuf;\n        oggp->buf_size = newsize;\n        oggp->buf = oggp->alloc_buf + MAX_HEADER_SIZE;\n      } else {\n        return NULL;\n      }\n    }\n  }\n  oggp->user_buf = &oggp->buf[oggp->buf_fill];\n  return oggp->user_buf;\n}\n\n/** Tells the oggpacker that the packet buffer obtained from\n    oggp_get_packet_buffer() has been filled and the number of bytes written\n    has to be no more than what was originally asked for. */\nint oggp_commit_packet(oggpacker *oggp, oggp_int32 bytes, oggp_uint64 granulepos, int eos) {\n  size_t i;\n  size_t nb_255s;\n  assert(oggp->user_buf != NULL);\n  nb_255s = bytes/255;\n  if (oggp->lacing_fill-oggp->lacing_begin+nb_255s+1 > 255 ||\n      (oggp->muxing_delay && granulepos - oggp->last_granule > oggp->muxing_delay)) {\n    oggp_flush_page(oggp);\n  }\n  assert(oggp->user_buf >= &oggp->buf[oggp->buf_fill]);\n  oggp->buf_fill += bytes;\n  if (oggp->lacing_fill + nb_255s + 1 > oggp->lacing_size) {\n    shift_buffer(oggp);\n\n    /* If we didn't shift the values or if we did and there's still not enough room, make some more. */\n    if (oggp->lacing_fill + nb_255s + 1 > oggp->lacing_size) {\n      size_t newsize;\n      unsigned char *newbuf;\n      newsize = oggp->lacing_fill + nb_255s + 1;\n      /* Making sure we don't need to do that too often. */\n      newsize = newsize*3/2;\n      newbuf = realloc(oggp->lacing, newsize);\n      if (newbuf != NULL) {\n        oggp->lacing = newbuf;\n        oggp->lacing_size = newsize;\n      } else {\n        return 1;\n      }\n    }\n  }\n  /* If we moved the buffer data, update the incoming packet location. 
*/\n  if (oggp->user_buf > &oggp->buf[oggp->buf_fill]) {\n    memmove(&oggp->buf[oggp->buf_fill], oggp->user_buf, bytes);\n  }\n  for (i=0;i<nb_255s;i++) {\n    oggp->lacing[oggp->lacing_fill+i] = 255;\n  }\n  oggp->lacing[oggp->lacing_fill+nb_255s] = bytes - 255*nb_255s;\n  oggp->lacing_fill += nb_255s + 1;\n  oggp->curr_granule = granulepos;\n  oggp->is_eos = eos;\n  if (oggp->muxing_delay && granulepos - oggp->last_granule >= oggp->muxing_delay) {\n    oggp_flush_page(oggp);\n  }\n  return 0;\n}\n\n/** Create a page from the data written so far (and not yet part of a previous page).\n    If there is too much data for one page, all page continuations will be closed too. */\nint oggp_flush_page(oggpacker *oggp) {\n  oggp_page *p;\n  int cont = 0;\n  size_t nb_lacing;\n  if (oggp->lacing_fill == oggp->lacing_begin) {\n    return 1;\n  }\n  nb_lacing = oggp->lacing_fill - oggp->lacing_begin;\n  do {\n    if (oggp->pages_fill >= oggp->pages_size) {\n      size_t newsize;\n      oggp_page *newbuf;\n      /* Making sure we don't need to do that too often. 
*/\n      newsize = 1 + oggp->pages_size*3/2;\n      newbuf = realloc(oggp->pages, newsize*sizeof(oggp_page));\n      if (newbuf != NULL) {\n        oggp->pages = newbuf;\n        oggp->pages_size = newsize;\n      } else {\n        assert(0);\n      }\n    }\n    p = &oggp->pages[oggp->pages_fill++];\n    p->granulepos = oggp->curr_granule;\n\n    p->lacing_pos = oggp->lacing_begin;\n    p->lacing_size = nb_lacing;\n    p->flags = cont;\n    p->buf_pos = oggp->buf_begin;\n    if (p->lacing_size > 255) {\n      size_t bytes=0;\n      int i;\n      /* Sum the first 255 lacing values to get the page body size. */\n      for (i=0;i<255;i++) bytes += oggp->lacing[oggp->lacing_begin+i];\n      p->buf_size = bytes;\n      p->lacing_size = 255;\n      p->granulepos = -1;\n      cont = 1;\n    } else {\n      p->buf_size = oggp->buf_fill - oggp->buf_begin;\n      if (oggp->is_eos) p->flags |= 0x04;\n    }\n    nb_lacing -= p->lacing_size;\n    oggp->lacing_begin += p->lacing_size;\n    oggp->buf_begin += p->buf_size;\n    p->pageno = oggp->pageno++;\n    if (p->pageno == 0)\n      p->flags |= 0x02;\n  } while (nb_lacing>0);\n\n  oggp->last_granule = oggp->curr_granule;\n  return 0;\n}\n\n/** Get a pointer to the contents of the next available page. Pointer is\n    invalidated on the next call to oggp_get_next_page() or oggp_commit_packet(). 
*/\nint oggp_get_next_page(oggpacker *oggp, unsigned char **page, oggp_int32 *bytes) {\n  oggp_page *p;\n  int i;\n  unsigned char *ptr;\n  size_t len;\n  int header_size;\n  oggp_uint64 granule_pos;\n  if (oggp->pages_fill == 0) {\n    *page = NULL;\n    *bytes = 0;\n    return 0;\n  }\n  p = &oggp->pages[0];\n  header_size = 27 + p->lacing_size;\n  ptr = &oggp->buf[p->buf_pos - header_size];\n  len = p->buf_size + header_size;\n  memcpy(&ptr[27], &oggp->lacing[p->lacing_pos], p->lacing_size);\n  memcpy(ptr, \"OggS\", 4);\n\n  /* stream structure version */\n  ptr[4]=0x00;\n\n  ptr[5]=0x00 | p->flags;\n\n  granule_pos = p->granulepos;\n  /* 64 bits of PCM position */\n  for(i=6;i<14;i++){\n    ptr[i]=(unsigned char)(granule_pos&0xff);\n    granule_pos>>=8;\n  }\n\n  /* 32 bits of stream serial number */\n  {\n    oggp_int32 serialno=oggp->serialno;\n    for(i=14;i<18;i++){\n      ptr[i]=(unsigned char)(serialno&0xff);\n      serialno>>=8;\n    }\n  }\n\n  {\n    oggp_int32 pageno=p->pageno;\n    for(i=18;i<22;i++){\n      ptr[i]=(unsigned char)(pageno&0xff);\n      pageno>>=8;\n    }\n  }\n\n  ptr[26] = p->lacing_size;\n\n  /* CRC is always last. */\n  ogg_page_checksum_set(ptr, len);\n\n  *page = ptr;\n  *bytes = len;\n  oggp->pages_fill--;\n  memmove(&oggp->pages[0], &oggp->pages[1], oggp->pages_fill*sizeof(oggp_page));\n  return 1;\n}\n\n/** Creates a new (chained) stream. This closes all outstanding pages. These\n    pages remain available with oggp_get_next_page(). */\nint oggp_chain(oggpacker *oggp, oggp_int32 serialno) {\n  oggp_flush_page(oggp);\n  oggp->serialno = serialno;\n  oggp->curr_granule = 0;\n  oggp->last_granule = 0;\n  oggp->is_eos = 0;\n  oggp->pageno = 0;\n  return 0;\n}\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/ogg_packer.h",
    "content": "/* Copyright (c) 2017 Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#if !defined(_oggpacker_h)\n# define _oggpacker_h (1)\n\n\n# if defined(__cplusplus)\nextern \"C\" {\n# endif\n\ntypedef unsigned long long oggp_uint64;\ntypedef unsigned oggp_uint32;\ntypedef int oggp_int32;\n\ntypedef struct oggpacker oggpacker;\n\n/** Allocates an oggpacker object */\noggpacker *oggp_create(oggp_int32 serialno);\n\n/** Frees memory associated with an oggpacker object */\nvoid oggp_destroy(oggpacker *oggp);\n\n/** Sets the maximum muxing delay in granulepos units. Pages will be auto-flushed\n    to enforce the delay and to avoid continued pages if possible. 
*/\nvoid oggp_set_muxing_delay(oggpacker *oggp, oggp_uint64 delay);\n\n/** Get a buffer in which to write the next packet. The buffer will have\n    size \"bytes\", but fewer bytes can be written. The buffer remains valid through\n    a call to oggp_flush_page() or oggp_get_next_page(), but is invalidated by\n    another call to oggp_get_packet_buffer() or by a call to oggp_commit_packet(). */\nunsigned char *oggp_get_packet_buffer(oggpacker *oggp, oggp_int32 bytes);\n\n/** Tells the oggpacker that the packet buffer obtained from\n    oggp_get_packet_buffer() has been filled and the number of bytes written\n    has to be no more than what was originally asked for. */\nint oggp_commit_packet(oggpacker *oggp, oggp_int32 bytes, oggp_uint64 granulepos, int eos);\n\n/** Create a page from the data written so far (and not yet part of a previous page).\n    If there is too much data for one page, then all page continuations will be closed too. */\nint oggp_flush_page(oggpacker *oggp);\n\n/** Get a pointer to the contents of the next available page. Pointer is\n    invalidated on the next call to oggp_get_next_page() or oggp_commit_packet(). */\nint oggp_get_next_page(oggpacker *oggp, unsigned char **page, oggp_int32 *bytes);\n\n/** Creates a new (chained) stream. This closes all outstanding pages. These\n    pages remain available with oggp_get_next_page(). */\nint oggp_chain(oggpacker *oggp, oggp_int32 serialno);\n\n# if defined(__cplusplus)\n}\n# endif\n\n#endif\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/opus_header.c",
    "content": "/* Copyright (C)2012 Xiph.Org Foundation\n   File: opus_header.c\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n# include \"config.h\"\n#endif\n\n#include \"opus_header.h\"\n#include <string.h>\n#include <stdio.h>\n\n/* Header contents:\n  - \"OpusHead\" (64 bits)\n  - version number (8 bits)\n  - Channels C (8 bits)\n  - Pre-skip (16 bits)\n  - Sampling rate (32 bits)\n  - Gain in dB (16 bits, S7.8)\n  - Mapping (8 bits, 0=single stream (mono/stereo) 1=Vorbis mapping,\n             2..254: reserved, 255: multistream with no mapping)\n\n  - if (mapping != 0)\n     - N = total number of streams (8 bits)\n     - M = number of paired streams (8 bits)\n     - C times channel origin\n  
        - if (C<2*M)\n             - stream = byte/2\n             - if (byte&0x1 == 0)\n                 - left\n               else\n                 - right\n          - else\n             - stream = byte-M\n*/\n\ntypedef struct {\n   unsigned char *data;\n   int maxlen;\n   int pos;\n} Packet;\n\ntypedef struct {\n   const unsigned char *data;\n   int maxlen;\n   int pos;\n} ROPacket;\n\nstatic int write_uint32(Packet *p, opus_uint32 val)\n{\n   if (p->pos>p->maxlen-4)\n      return 0;\n   p->data[p->pos  ] = (val    ) & 0xFF;\n   p->data[p->pos+1] = (val>> 8) & 0xFF;\n   p->data[p->pos+2] = (val>>16) & 0xFF;\n   p->data[p->pos+3] = (val>>24) & 0xFF;\n   p->pos += 4;\n   return 1;\n}\n\nstatic int write_uint16(Packet *p, opus_uint16 val)\n{\n   if (p->pos>p->maxlen-2)\n      return 0;\n   p->data[p->pos  ] = (val    ) & 0xFF;\n   p->data[p->pos+1] = (val>> 8) & 0xFF;\n   p->pos += 2;\n   return 1;\n}\n\nstatic int write_chars(Packet *p, const unsigned char *str, int nb_chars)\n{\n   int i;\n   if (p->pos>p->maxlen-nb_chars)\n      return 0;\n   for (i=0;i<nb_chars;i++)\n      p->data[p->pos++] = str[i];\n   return 1;\n}\n\nint opus_header_to_packet(const OpusHeader *h, unsigned char *packet, int len)\n{\n   int i;\n   Packet p;\n   unsigned char ch;\n\n   p.data = packet;\n   p.maxlen = len;\n   p.pos = 0;\n   if (len<19)return 0;\n   if (!write_chars(&p, (const unsigned char*)\"OpusHead\", 8))\n      return 0;\n   /* Version is 1 */\n   ch = 1;\n   if (!write_chars(&p, &ch, 1))\n      return 0;\n\n   ch = h->channels;\n   if (!write_chars(&p, &ch, 1))\n      return 0;\n\n   if (!write_uint16(&p, h->preskip))\n      return 0;\n\n   if (!write_uint32(&p, h->input_sample_rate))\n      return 0;\n\n   if (!write_uint16(&p, h->gain))\n      return 0;\n\n   ch = h->channel_mapping;\n   if (!write_chars(&p, &ch, 1))\n      return 0;\n\n   if (h->channel_mapping != 0)\n   {\n      ch = h->nb_streams;\n      if (!write_chars(&p, &ch, 1))\n         return 0;\n\n      ch 
= h->nb_coupled;\n      if (!write_chars(&p, &ch, 1))\n         return 0;\n\n      /* Multi-stream support */\n      for (i=0;i<h->channels;i++)\n      {\n         if (!write_chars(&p, &h->stream_map[i], 1))\n            return 0;\n      }\n   }\n\n   return p.pos;\n}\n\n/*\n Comments will be stored in the Vorbis style.\n It is described in the \"Structure\" section of\n    http://www.xiph.org/ogg/vorbis/doc/v-comment.html\n\n However, Opus and other non-vorbis formats omit the \"framing_bit\".\n\nThe comment header is decoded as follows:\n  1) [vendor_length] = read an unsigned integer of 32 bits\n  2) [vendor_string] = read a UTF-8 vector as [vendor_length] octets\n  3) [user_comment_list_length] = read an unsigned integer of 32 bits\n  4) iterate [user_comment_list_length] times {\n     5) [length] = read an unsigned integer of 32 bits\n     6) this iteration's user comment = read a UTF-8 vector as [length] octets\n     }\n  7) done.\n*/\n\n#define readint(buf, base) (((buf[base+3]<<24)&0xff000000)| \\\n                           ((buf[base+2]<<16)&0xff0000)| \\\n                           ((buf[base+1]<<8)&0xff00)| \\\n                           (buf[base]&0xff))\n#define writeint(buf, base, val) do{ buf[base+3]=((val)>>24)&0xff; \\\n                                     buf[base+2]=((val)>>16)&0xff; \\\n                                     buf[base+1]=((val)>>8)&0xff; \\\n                                     buf[base]=(val)&0xff; \\\n                                 }while(0)\n\nvoid comment_init(char **comments, int* length, const char *vendor_string)\n{\n  /*The 'vendor' field should be the actual encoding library used.*/\n  int vendor_length=strlen(vendor_string);\n  int user_comment_list_length=0;\n  int len=8+4+vendor_length+4;\n  char *p=(char*)malloc(len);\n  if (p == NULL) return;\n  memcpy(p, \"OpusTags\", 8);\n  writeint(p, 8, vendor_length);\n  memcpy(p+12, vendor_string, vendor_length);\n  writeint(p, 12+vendor_length, user_comment_list_length);\n  
*length=len;\n  *comments=p;\n}\n\nint comment_add(char **comments, int* length, const char *tag, const char *val)\n{\n  char* p=*comments;\n  int vendor_length=readint(p, 8);\n  int user_comment_list_length=readint(p, 8+4+vendor_length);\n  int tag_len=(tag?strlen(tag)+1:0);\n  int val_len=strlen(val);\n  int len=(*length)+4+tag_len+val_len;\n\n  p=(char*)realloc(p, len);\n  if (p == NULL) return 1;\n\n  writeint(p, *length, tag_len+val_len);      /* length of comment */\n  if(tag){\n    memcpy(p+*length+4, tag, tag_len);        /* comment tag */\n    (p+*length+4)[tag_len-1] = '=';           /* separator */\n  }\n  memcpy(p+*length+4+tag_len, val, val_len);  /* comment */\n  writeint(p, 8+4+vendor_length, user_comment_list_length+1);\n  *comments=p;\n  *length=len;\n  return 0;\n}\n\nvoid comment_pad(char **comments, int* length, int amount)\n{\n  if(amount>0){\n    int i;\n    int newlen;\n    char* p=*comments;\n    /*Make sure there is at least amount worth of padding free, and\n       round up to the maximum that fits in the current ogg segments.*/\n    newlen=(*length+amount+255)/255*255-1;\n    p=realloc(p,newlen);\n    if (p == NULL) return;\n    for(i=*length;i<newlen;i++)p[i]=0;\n    *comments=p;\n    *length=newlen;\n  }\n}\n\nint comment_replace_vendor_string(char **comments, int* length, const char *vendor_string)\n{\n  char* p=*comments;\n  int vendor_length;\n  int newlen;\n  int newvendor_length;\n  vendor_length=readint(p, 8);\n  newvendor_length=strlen(vendor_string);\n  newlen=*length+newvendor_length-vendor_length;\n  p=realloc(p, newlen);\n  if (p == NULL) return 1;\n  writeint(p, 8, newvendor_length);\n  memmove(p+12+newvendor_length, p+12+vendor_length, newlen-12-newvendor_length);\n  memcpy(p+12, vendor_string, newvendor_length);\n  *comments=p;\n  *length=newlen;\n  return 0;\n}\n#undef readint\n#undef writeint\n\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/opus_header.h",
    "content": "/* Copyright (C)2012 Xiph.Org Foundation\n   File: opus_header.h\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef OPUS_HEADER_H\n#define OPUS_HEADER_H\n\n#include <stdlib.h>\n#include <opus.h>\n\ntypedef struct {\n   int version;\n   int channels; /* Number of channels: 1..255 */\n   int preskip;\n   opus_uint32 input_sample_rate;\n   opus_int32 gain; /* in dB S7.8 should be zero whenever possible */\n   int channel_mapping;\n   /* The rest is only used if channel_mapping != 0 */\n   int nb_streams;\n   int nb_coupled;\n   unsigned char stream_map[255];\n} OpusHeader;\n\nint opus_header_to_packet(const OpusHeader *h, unsigned char *packet, int len);\n\nvoid comment_init(char **comments, int* 
length, const char *vendor_string);\n\nint comment_add(char **comments, int* length, const char *tag, const char *val);\n\nvoid comment_pad(char **comments, int* length, int amount);\n\nint comment_replace_vendor_string(char **comments, int* length, const char *vendor_string);\n\n#endif\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/opusenc.c",
    "content": "/* Copyright (C)2002-2017 Jean-Marc Valin\n   Copyright (C)2007-2013 Xiph.Org Foundation\n   Copyright (C)2008-2013 Gregory Maxwell\n   File: opusenc.c\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include <stdarg.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <assert.h>\n#include <opus_multistream.h>\n#include \"opusenc.h\"\n#include \"opus_header.h\"\n#include \"speex_resampler.h\"\n#include \"picture.h\"\n#include \"ogg_packer.h\"\n\n/* Bump this when we change the ABI. 
*/\n#define OPE_ABI_VERSION 0\n\n#define LPC_PADDING 120\n#define LPC_ORDER 24\n#define LPC_INPUT 480\n/* Make the following constant always equal to 2*cos(M_PI/LPC_PADDING) */\n#define LPC_GOERTZEL_CONST 1.99931465f\n\n/* Allow up to 2 seconds for delayed decision. */\n#define MAX_LOOKAHEAD 96000\n/* We can't have a circular buffer (because of delayed decision), so let's not copy too often. */\n#define BUFFER_EXTRA 24000\n\n#define BUFFER_SAMPLES (MAX_LOOKAHEAD + BUFFER_EXTRA)\n\n#define MIN(a,b) ((a) < (b) ? (a) : (b))\n#define MAX(a,b) ((a) > (b) ? (a) : (b))\n\n#ifdef _MSC_VER\n# if (_MSC_VER < 1900)\n#  define snprintf _snprintf\n# endif\n#endif\n\nstruct StdioObject {\n  FILE *file;\n};\n\nstruct OggOpusComments {\n  char *comment;\n  int comment_length;\n  int seen_file_icons;\n};\n\n/* Create a new comments object. The vendor string is optional. */\nOggOpusComments *ope_comments_create() {\n  OggOpusComments *c;\n  const char *libopus_str;\n  char vendor_str[1024];\n  c = malloc(sizeof(*c));\n  if (c == NULL) return NULL;\n  libopus_str = opus_get_version_string();\n  snprintf(vendor_str, sizeof(vendor_str), \"%s, %s version %s\", libopus_str, PACKAGE_NAME, PACKAGE_VERSION);\n  comment_init(&c->comment, &c->comment_length, vendor_str);\n  if (c->comment == NULL) {\n    free(c);\n    return NULL;\n  } else {\n    return c;\n  }\n}\n\n/* Create a deep copy of a comments object. */\nOggOpusComments *ope_comments_copy(OggOpusComments *comments) {\n  OggOpusComments *c;\n  c = malloc(sizeof(*c));\n  if (c == NULL) return NULL;\n  memcpy(c, comments, sizeof(*c));\n  c->comment = malloc(comments->comment_length);\n  if (c->comment == NULL) {\n    free(c);\n    return NULL;\n  } else {\n    memcpy(c->comment, comments->comment, comments->comment_length);\n    return c;\n  }\n}\n\n/* Destroys a comments object. */\nvoid ope_comments_destroy(OggOpusComments *comments){\n  free(comments->comment);\n  free(comments);\n}\n\n/* Add a comment. 
*/\nint ope_comments_add(OggOpusComments *comments, const char *tag, const char *val) {\n  if (tag == NULL || val == NULL) return OPE_BAD_ARG;\n  if (strchr(tag, '=')) return OPE_BAD_ARG;\n  if (comment_add(&comments->comment, &comments->comment_length, tag, val)) return OPE_ALLOC_FAIL;\n  return OPE_OK;\n}\n\n/* Add a comment as a single \"TAG=value\" string. */\nint ope_comments_add_string(OggOpusComments *comments, const char *tag_and_val) {\n  if (!strchr(tag_and_val, '=')) return OPE_BAD_ARG;\n  /* Pass a NULL tag so comment_add() stores tag_and_val verbatim. */\n  if (comment_add(&comments->comment, &comments->comment_length, NULL, tag_and_val)) return OPE_ALLOC_FAIL;\n  return OPE_OK;\n}\n\nint ope_comments_add_picture(OggOpusComments *comments, const char *filename, int picture_type, const char *description) {\n  char *picture_data;\n  int err;\n  picture_data = parse_picture_specification(filename, picture_type, description, &err, &comments->seen_file_icons);\n  if (picture_data == NULL || err != OPE_OK){\n    return err;\n  }\n  comment_add(&comments->comment, &comments->comment_length, \"METADATA_BLOCK_PICTURE\", picture_data);\n  free(picture_data);\n  return OPE_OK;\n}\n\n\ntypedef struct EncStream EncStream;\n\nstruct EncStream {\n  void *user_data;\n  int serialno_is_set;\n  int serialno;\n  int stream_is_init;\n  int packetno;\n  char *comment;\n  int comment_length;\n  int seen_file_icons;\n  int close_at_end;\n  int header_is_frozen;\n  opus_int64 end_granule;\n  opus_int64 granule_offset;\n  EncStream *next;\n};\n\nstruct OggOpusEnc {\n  OpusMSEncoder *st;\n  oggpacker *oggp;\n  int unrecoverable;\n  int pull_api;\n  int rate;\n  int channels;\n  float *buffer;\n  int buffer_start;\n  int buffer_end;\n  SpeexResamplerState *re;\n  int frame_size;\n  int decision_delay;\n  int max_ogg_delay;\n  int global_granule_offset;\n  opus_int64 curr_granule;\n  opus_int64 write_granule;\n  opus_int64 last_page_granule;\n  float *lpc_buffer;\n  unsigned char *chaining_keyframe;\n  int chaining_keyframe_length;\n  OpusEncCallbacks callbacks;\n  ope_packet_func 
packet_callback;\n  void *packet_callback_data;\n  OpusHeader header;\n  int comment_padding;\n  EncStream *streams;\n  EncStream *last_stream;\n};\n\nstatic void output_pages(OggOpusEnc *enc) {\n  unsigned char *page;\n  int len;\n  while (oggp_get_next_page(enc->oggp, &page, &len)) {\n    enc->callbacks.write(enc->streams->user_data, page, len);\n  }\n}\nstatic void oe_flush_page(OggOpusEnc *enc) {\n  oggp_flush_page(enc->oggp);\n  if (!enc->pull_api) output_pages(enc);\n}\n\nint stdio_write(void *user_data, const unsigned char *ptr, opus_int32 len) {\n  struct StdioObject *obj = (struct StdioObject*)user_data;\n  return fwrite(ptr, 1, len, obj->file) != (size_t)len;\n}\n\nint stdio_close(void *user_data) {\n  struct StdioObject *obj = (struct StdioObject*)user_data;\n  int ret = 0;\n  if (obj->file) ret = fclose(obj->file);\n  free(obj);\n  return ret;\n}\n\nstatic const OpusEncCallbacks stdio_callbacks = {\n  stdio_write,\n  stdio_close\n};\n\n/* Create a new OggOpus file. */\nOggOpusEnc *ope_encoder_create_file(const char *path, OggOpusComments *comments, opus_int32 rate, int channels, int family, int *error) {\n  OggOpusEnc *enc;\n  struct StdioObject *obj;\n  obj = malloc(sizeof(*obj));\n  /* Bail out now rather than dereference a NULL obj below. */\n  if (obj == NULL) {\n    if (error) *error = OPE_ALLOC_FAIL;\n    return NULL;\n  }\n  enc = ope_encoder_create_callbacks(&stdio_callbacks, obj, comments, rate, channels, family, error);\n  if (enc == NULL || (error && *error)) {\n    free(obj);\n    return NULL;\n  }\n  obj->file = fopen(path, \"wb\");\n  if (!obj->file) {\n    if (error) *error = OPE_CANNOT_OPEN;\n    ope_encoder_destroy(enc);\n    return NULL;\n  }\n  return enc;\n}\n\nEncStream *stream_create(OggOpusComments *comments) {\n  EncStream *stream;\n  stream = malloc(sizeof(*stream));\n  if (!stream) return NULL;\n  stream->next = NULL;\n  stream->close_at_end = 1;\n  stream->serialno_is_set = 0;\n  stream->stream_is_init = 0;\n  stream->header_is_frozen = 0;\n  stream->granule_offset = 0;\n  stream->comment = malloc(comments->comment_length);\n  if (stream->comment == NULL) goto fail;\n  
memcpy(stream->comment, comments->comment, comments->comment_length);\n  stream->comment_length = comments->comment_length;\n  stream->seen_file_icons = comments->seen_file_icons;\n  return stream;\nfail:\n  if (stream->comment) free(stream->comment);\n  free(stream);\n  return NULL;\n}\n\nstatic void stream_destroy(EncStream *stream) {\n  if (stream->comment) free(stream->comment);\n  free(stream);\n}\n\n/* Create a new OggOpus file (callback-based). */\nOggOpusEnc *ope_encoder_create_callbacks(const OpusEncCallbacks *callbacks, void *user_data,\n    OggOpusComments *comments, opus_int32 rate, int channels, int family, int *error) {\n  OpusMSEncoder *st=NULL;\n  OggOpusEnc *enc=NULL;\n  int ret;\n  if (family != 0 && family != 1 && family != 255) {\n    if (error) *error = OPE_UNIMPLEMENTED;\n    return NULL;\n  }\n  if (channels <= 0 || channels > 255) {\n    if (error) *error = OPE_BAD_ARG;\n    return NULL;\n  }\n  if (rate <= 0) {\n    if (error) *error = OPE_BAD_ARG;\n    return NULL;\n  }\n\n  if ( (enc = malloc(sizeof(*enc))) == NULL) goto fail;\n  enc->streams = NULL;\n  /* NULL these now so the fail path never frees uninitialized pointers. */\n  enc->buffer = NULL;\n  enc->lpc_buffer = NULL;\n  if ( (enc->streams = stream_create(comments)) == NULL) goto fail;\n  enc->streams->next = NULL;\n  enc->last_stream = enc->streams;\n  enc->oggp = NULL;\n  enc->unrecoverable = 0;\n  enc->pull_api = 0;\n  enc->packet_callback = NULL;\n  enc->rate = rate;\n  enc->channels = channels;\n  enc->frame_size = 960;\n  enc->decision_delay = 96000;\n  enc->max_ogg_delay = 48000;\n  enc->chaining_keyframe = NULL;\n  enc->chaining_keyframe_length = -1;\n  enc->comment_padding = 512;\n  enc->header.channels=channels;\n  enc->header.channel_mapping=family;\n  enc->header.input_sample_rate=rate;\n  enc->header.gain=0;\n  st=opus_multistream_surround_encoder_create(48000, channels, enc->header.channel_mapping,\n      &enc->header.nb_streams, &enc->header.nb_coupled,\n      enc->header.stream_map, OPUS_APPLICATION_AUDIO, &ret);\n  if (!
(ret == OPUS_OK && st != NULL) ) {\n    goto fail;\n  }\n  if (rate != 48000) {\n    enc->re = speex_resampler_init(channels, rate, 48000, 5, NULL);\n    if (enc->re == NULL) goto fail;\n    speex_resampler_skip_zeros(enc->re);\n  } else {\n    enc->re = NULL;\n  }\n  opus_multistream_encoder_ctl(st, OPUS_SET_EXPERT_FRAME_DURATION(OPUS_FRAMESIZE_20_MS));\n  {\n    opus_int32 tmp;\n    int ret;\n    ret = opus_multistream_encoder_ctl(st, OPUS_GET_LOOKAHEAD(&tmp));\n    if (ret == OPUS_OK) enc->header.preskip = tmp;\n    else enc->header.preskip = 0;\n    enc->global_granule_offset = enc->header.preskip;\n  }\n  enc->curr_granule = 0;\n  enc->write_granule = 0;\n  enc->last_page_granule = 0;\n  if ( (enc->buffer = malloc(sizeof(*enc->buffer)*BUFFER_SAMPLES*channels)) == NULL) goto fail;\n  if (rate != 48000) {\n    /* Allocate an extra LPC_PADDING samples so we can do the padding in-place. */\n    if ( (enc->lpc_buffer = malloc(sizeof(*enc->lpc_buffer)*(LPC_INPUT+LPC_PADDING)*channels)) == NULL) goto fail;\n    memset(enc->lpc_buffer, 0, sizeof(*enc->lpc_buffer)*LPC_INPUT*channels);\n  } else {\n    enc->lpc_buffer = NULL;\n  }\n  enc->buffer_start = enc->buffer_end = 0;\n  enc->st = st;\n  if (callbacks != NULL)\n  {\n    enc->callbacks = *callbacks;\n  }\n  enc->streams->user_data = user_data;\n  if (error) *error = OPE_OK;\n  return enc;\nfail:\n  /* Free the members before enc itself; freeing enc first would make the\n     member accesses below use-after-free. */\n  if (enc) {\n    if (enc->buffer) free(enc->buffer);\n    if (enc->streams) stream_destroy(enc->streams);\n    if (enc->lpc_buffer) free(enc->lpc_buffer);\n    free(enc);\n  }\n  if (st) {\n    opus_multistream_encoder_destroy(st);\n  }\n  if (error) *error = OPE_ALLOC_FAIL;\n  return NULL;\n}\n\n/* Create a new OggOpus stream, pulling one page at a time. 
*/\nOPE_EXPORT OggOpusEnc *ope_encoder_create_pull(OggOpusComments *comments, opus_int32 rate, int channels, int family, int *error) {\n  OggOpusEnc *enc = ope_encoder_create_callbacks(NULL, NULL, comments, rate, channels, family, error);\n  if (enc) enc->pull_api = 1;\n  return enc;\n}\n\nstatic void init_stream(OggOpusEnc *enc) {\n  assert(!enc->streams->stream_is_init);\n  if (!enc->streams->serialno_is_set) {\n    enc->streams->serialno = rand();\n  }\n\n  if (enc->oggp != NULL) oggp_chain(enc->oggp, enc->streams->serialno);\n  else {\n    enc->oggp = oggp_create(enc->streams->serialno);\n    if (enc->oggp == NULL) {\n      enc->unrecoverable = 1;\n      return;\n    }\n    oggp_set_muxing_delay(enc->oggp, enc->max_ogg_delay);\n  }\n  comment_pad(&enc->streams->comment, &enc->streams->comment_length, enc->comment_padding);\n\n  /*Write header*/\n  {\n    int packet_size;\n    unsigned char *p;\n    p = oggp_get_packet_buffer(enc->oggp, 276);\n    packet_size = opus_header_to_packet(&enc->header, p, 276);\n    if (enc->packet_callback) enc->packet_callback(enc->packet_callback_data, p, packet_size, 0);\n    oggp_commit_packet(enc->oggp, packet_size, 0, 0);\n    oe_flush_page(enc);\n    p = oggp_get_packet_buffer(enc->oggp, enc->streams->comment_length);\n    memcpy(p, enc->streams->comment, enc->streams->comment_length);\n    if (enc->packet_callback) enc->packet_callback(enc->packet_callback_data, p, enc->streams->comment_length, 0);\n    oggp_commit_packet(enc->oggp, enc->streams->comment_length, 0, 0);\n    oe_flush_page(enc);\n  }\n  enc->streams->stream_is_init = 1;\n  enc->streams->packetno = 2;\n}\n\nstatic void shift_buffer(OggOpusEnc *enc) {\n  /* Leaving enough in the buffer to do LPC extension if needed. 
*/\n  if (enc->buffer_start > LPC_INPUT) {\n    memmove(&enc->buffer[0], &enc->buffer[enc->channels*(enc->buffer_start-LPC_INPUT)],\n            enc->channels*(enc->buffer_end-enc->buffer_start+LPC_INPUT)*sizeof(*enc->buffer));\n    enc->buffer_end -= enc->buffer_start-LPC_INPUT;\n    enc->buffer_start = LPC_INPUT;\n  }\n}\n\nstatic void encode_buffer(OggOpusEnc *enc) {\n  opus_int32 max_packet_size;\n  /* Round up when converting the granule pos because the decoder will round down. */\n  opus_int64 end_granule48k = (enc->streams->end_granule*48000 + enc->rate - 1)/enc->rate + enc->global_granule_offset;\n  max_packet_size = (1277*6+2)*enc->header.nb_streams;\n  while (enc->buffer_end-enc->buffer_start > enc->frame_size + enc->decision_delay) {\n    int cont;\n    int e_o_s;\n    opus_int32 pred;\n    int nbBytes;\n    unsigned char *packet;\n    unsigned char *packet_copy = NULL;\n    int is_keyframe=0;\n    if (enc->unrecoverable) return;\n    opus_multistream_encoder_ctl(enc->st, OPUS_GET_PREDICTION_DISABLED(&pred));\n    /* FIXME: a frame that follows a keyframe generally doesn't need to be a keyframe\n       unless there's two consecutive stream boundaries. */\n    if (enc->curr_granule + 2*enc->frame_size >= end_granule48k && enc->streams->next) {\n      opus_multistream_encoder_ctl(enc->st, OPUS_SET_PREDICTION_DISABLED(1));\n      is_keyframe = 1;\n    }\n    packet = oggp_get_packet_buffer(enc->oggp, max_packet_size);\n    /* Encode exactly one frame; the buffer may hold more than frame_size samples. */\n    nbBytes = opus_multistream_encode_float(enc->st, &enc->buffer[enc->channels*enc->buffer_start],\n        enc->frame_size, packet, max_packet_size);\n    if (nbBytes < 0) {\n      /* Anything better we can do here? 
*/\n      enc->unrecoverable = 1;\n      return;\n    }\n    opus_multistream_encoder_ctl(enc->st, OPUS_SET_PREDICTION_DISABLED(pred));\n    assert(nbBytes > 0);\n    enc->curr_granule += enc->frame_size;\n    do {\n      opus_int64 granulepos;\n      granulepos=enc->curr_granule-enc->streams->granule_offset;\n      e_o_s=enc->curr_granule >= end_granule48k;\n      cont = 0;\n      if (e_o_s) granulepos=end_granule48k-enc->streams->granule_offset;\n      if (packet_copy != NULL) {\n        packet = oggp_get_packet_buffer(enc->oggp, max_packet_size);\n        memcpy(packet, packet_copy, nbBytes);\n      }\n      if (enc->packet_callback) enc->packet_callback(enc->packet_callback_data, packet, nbBytes, 0);\n      if ((e_o_s || is_keyframe) && packet_copy == NULL) {\n        packet_copy = malloc(nbBytes);\n        if (packet_copy == NULL) {\n          /* Can't recover from allocation failing here. */\n          enc->unrecoverable = 1;\n          return;\n        }\n        memcpy(packet_copy, packet, nbBytes);\n      }\n      oggp_commit_packet(enc->oggp, nbBytes, granulepos, e_o_s);\n      if (e_o_s) oe_flush_page(enc);\n      else if (!enc->pull_api) output_pages(enc);\n      if (e_o_s) {\n        EncStream *tmp;\n        tmp = enc->streams->next;\n        if (enc->streams->close_at_end && !enc->pull_api) enc->callbacks.close(enc->streams->user_data);\n        stream_destroy(enc->streams);\n        enc->streams = tmp;\n        if (!tmp) enc->last_stream = NULL;\n        if (enc->last_stream == NULL) {\n          if (packet_copy) free(packet_copy);\n          return;\n        }\n        /* We're done with this stream, start the next one. 
*/\n        enc->header.preskip = end_granule48k + enc->frame_size - enc->curr_granule;\n        enc->streams->granule_offset = enc->curr_granule - enc->frame_size;\n        if (enc->chaining_keyframe) {\n          enc->header.preskip += enc->frame_size;\n          enc->streams->granule_offset -= enc->frame_size;\n        }\n        init_stream(enc);\n        if (enc->chaining_keyframe) {\n          unsigned char *p;\n          opus_int64 granulepos2=enc->curr_granule - enc->streams->granule_offset - enc->frame_size;\n          p = oggp_get_packet_buffer(enc->oggp, enc->chaining_keyframe_length);\n          memcpy(p, enc->chaining_keyframe, enc->chaining_keyframe_length);\n          if (enc->packet_callback) enc->packet_callback(enc->packet_callback_data, enc->chaining_keyframe, enc->chaining_keyframe_length, 0);\n          oggp_commit_packet(enc->oggp, enc->chaining_keyframe_length, granulepos2, 0);\n        }\n        end_granule48k = (enc->streams->end_granule*48000 + enc->rate - 1)/enc->rate + enc->global_granule_offset;\n        cont = 1;\n      }\n    } while (cont);\n    if (enc->chaining_keyframe) free(enc->chaining_keyframe);\n    if (is_keyframe) {\n      enc->chaining_keyframe_length = nbBytes;\n      enc->chaining_keyframe = packet_copy;\n      packet_copy = NULL;\n    } else {\n      enc->chaining_keyframe = NULL;\n      enc->chaining_keyframe_length = -1;\n    }\n    if (packet_copy) free(packet_copy);\n    enc->buffer_start += enc->frame_size;\n  }\n  /* If we've reached the end of the buffer, move everything back to the front. */\n  if (enc->buffer_end == BUFFER_SAMPLES) {\n    shift_buffer(enc);\n  }\n  /* This function must never leave the buffer full. */\n  assert(enc->buffer_end < BUFFER_SAMPLES);\n}\n\n/* Add/encode any number of float samples to the file. 
*/\nint ope_encoder_write_float(OggOpusEnc *enc, const float *pcm, int samples_per_channel) {\n  int channels = enc->channels;\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  enc->last_stream->header_is_frozen = 1;\n  if (!enc->streams->stream_is_init) init_stream(enc);\n  if (samples_per_channel < 0) return OPE_BAD_ARG;\n  enc->write_granule += samples_per_channel;\n  enc->last_stream->end_granule = enc->write_granule;\n  if (enc->lpc_buffer) {\n    int i;\n    if (samples_per_channel < LPC_INPUT) {\n      for (i=0;i<(LPC_INPUT-samples_per_channel)*channels;i++) enc->lpc_buffer[i] = enc->lpc_buffer[samples_per_channel*channels + i];\n      for (i=0;i<samples_per_channel*channels;i++) enc->lpc_buffer[(LPC_INPUT-samples_per_channel)*channels + i] = pcm[i];\n    } else {\n      for (i=0;i<LPC_INPUT*channels;i++) enc->lpc_buffer[i] = pcm[(samples_per_channel-LPC_INPUT)*channels + i];\n    }\n  }\n  do {\n    int i;\n    spx_uint32_t in_samples, out_samples;\n    out_samples = BUFFER_SAMPLES-enc->buffer_end;\n    if (enc->re != NULL) {\n      in_samples = samples_per_channel;\n      speex_resampler_process_interleaved_float(enc->re, pcm, &in_samples, &enc->buffer[channels*enc->buffer_end], &out_samples);\n    } else {\n      int curr;\n      curr = MIN((spx_uint32_t)samples_per_channel, out_samples);\n      for (i=0;i<channels*curr;i++) {\n        enc->buffer[channels*enc->buffer_end+i] = pcm[i];\n      }\n      in_samples = out_samples = curr;\n    }\n    enc->buffer_end += out_samples;\n    pcm += in_samples*channels;\n    samples_per_channel -= in_samples;\n    encode_buffer(enc);\n    if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  } while (samples_per_channel > 0);\n  return OPE_OK;\n}\n\n#define CONVERT_BUFFER 4096\n\n/* Add/encode any number of int16 samples to the file. 
*/\nint ope_encoder_write(OggOpusEnc *enc, const opus_int16 *pcm, int samples_per_channel) {\n  int channels = enc->channels;\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  enc->last_stream->header_is_frozen = 1;\n  if (!enc->streams->stream_is_init) init_stream(enc);\n  if (samples_per_channel < 0) return OPE_BAD_ARG;\n  enc->write_granule += samples_per_channel;\n  enc->last_stream->end_granule = enc->write_granule;\n  if (enc->lpc_buffer) {\n    int i;\n    if (samples_per_channel < LPC_INPUT) {\n      for (i=0;i<(LPC_INPUT-samples_per_channel)*channels;i++) enc->lpc_buffer[i] = enc->lpc_buffer[samples_per_channel*channels + i];\n      for (i=0;i<samples_per_channel*channels;i++) enc->lpc_buffer[(LPC_INPUT-samples_per_channel)*channels + i] = (1.f/32768)*pcm[i];\n    } else {\n      for (i=0;i<LPC_INPUT*channels;i++) enc->lpc_buffer[i] = (1.f/32768)*pcm[(samples_per_channel-LPC_INPUT)*channels + i];\n    }\n  }\n  do {\n    int i;\n    spx_uint32_t in_samples, out_samples;\n    out_samples = BUFFER_SAMPLES-enc->buffer_end;\n    if (enc->re != NULL) {\n      float buf[CONVERT_BUFFER];\n      in_samples = MIN(CONVERT_BUFFER/channels, samples_per_channel);\n      for (i=0;i<channels*(int)in_samples;i++) {\n        buf[i] = (1.f/32768)*pcm[i];\n      }\n      speex_resampler_process_interleaved_float(enc->re, buf, &in_samples, &enc->buffer[channels*enc->buffer_end], &out_samples);\n    } else {\n      int curr;\n      curr = MIN((spx_uint32_t)samples_per_channel, out_samples);\n      for (i=0;i<channels*curr;i++) {\n        enc->buffer[channels*enc->buffer_end+i] = (1.f/32768)*pcm[i];\n      }\n      in_samples = out_samples = curr;\n    }\n    enc->buffer_end += out_samples;\n    pcm += in_samples*channels;\n    samples_per_channel -= in_samples;\n    encode_buffer(enc);\n    if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  } while (samples_per_channel > 0);\n  return OPE_OK;\n}\n\n/* Get the next page from the stream. 
Returns 1 if there is a page available, 0 if not. */\nOPE_EXPORT int ope_encoder_get_page(OggOpusEnc *enc, unsigned char **page, opus_int32 *len, int flush) {\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  if (!enc->pull_api) return 0;\n  else {\n    if (flush) oggp_flush_page(enc->oggp);\n    return oggp_get_next_page(enc->oggp, page, len);\n  }\n}\n\nstatic void extend_signal(float *x, int before, int after, int channels);\n\nint ope_encoder_drain(OggOpusEnc *enc) {\n  int pad_samples;\n  int resampler_drain = 0;\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  /* Check if it's already been drained. */\n  if (enc->streams == NULL) return OPE_TOO_LATE;\n  if (enc->re) resampler_drain = speex_resampler_get_output_latency(enc->re);\n  pad_samples = MAX(LPC_PADDING, enc->global_granule_offset + enc->frame_size + resampler_drain);\n  if (!enc->streams->stream_is_init) init_stream(enc);\n  shift_buffer(enc);\n  assert(enc->buffer_end + pad_samples <= BUFFER_SAMPLES);\n  memset(&enc->buffer[enc->channels*enc->buffer_end], 0, pad_samples*enc->channels*sizeof(enc->buffer[0]));\n  if (enc->re) {\n    spx_uint32_t in_samples, out_samples;\n    extend_signal(&enc->lpc_buffer[LPC_INPUT*enc->channels], LPC_INPUT, LPC_PADDING, enc->channels);\n    do {\n      in_samples = LPC_PADDING;\n      out_samples = pad_samples;\n      speex_resampler_process_interleaved_float(enc->re, &enc->lpc_buffer[LPC_INPUT*enc->channels], &in_samples, &enc->buffer[enc->channels*enc->buffer_end], &out_samples);\n      enc->buffer_end += out_samples;\n      pad_samples -= out_samples;\n      /* If we don't have enough padding, zero all zeros and repeat. 
*/\n      memset(&enc->lpc_buffer[LPC_INPUT*enc->channels], 0, LPC_PADDING*enc->channels*sizeof(enc->lpc_buffer[0]));\n    } while (pad_samples > 0);\n  } else {\n    extend_signal(&enc->buffer[enc->channels*enc->buffer_end], enc->buffer_end, LPC_PADDING, enc->channels);\n    enc->buffer_end += pad_samples;\n  }\n  enc->decision_delay = 0;\n  assert(enc->buffer_end <= BUFFER_SAMPLES);\n  encode_buffer(enc);\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  /* Draining should have called all the streams to complete. */\n  assert(enc->streams == NULL);\n  return OPE_OK;\n}\n\nvoid ope_encoder_destroy(OggOpusEnc *enc) {\n  EncStream *stream;\n  stream = enc->streams;\n  while (stream != NULL) {\n    EncStream *tmp = stream;\n    stream = stream->next;\n    if (tmp->close_at_end) enc->callbacks.close(tmp->user_data);\n    stream_destroy(tmp);\n  }\n  if (enc->chaining_keyframe) free(enc->chaining_keyframe);\n  free(enc->buffer);\n  if (enc->oggp) oggp_destroy(enc->oggp);\n  opus_multistream_encoder_destroy(enc->st);\n  if (enc->re) speex_resampler_destroy(enc->re);\n  if (enc->lpc_buffer) free(enc->lpc_buffer);\n  free(enc);\n}\n\n/* Ends the stream and create a new stream within the same file. */\nint ope_encoder_chain_current(OggOpusEnc *enc, OggOpusComments *comments) {\n  enc->last_stream->close_at_end = 0;\n  return ope_encoder_continue_new_callbacks(enc, enc->last_stream->user_data, comments);\n}\n\n/* Ends the stream and create a new file. */\nint ope_encoder_continue_new_file(OggOpusEnc *enc, const char *path, OggOpusComments *comments) {\n  int ret;\n  struct StdioObject *obj;\n  if (!(obj = malloc(sizeof(*obj)))) return OPE_ALLOC_FAIL;\n  obj->file = fopen(path, \"wb\");\n  if (!obj->file) {\n    free(obj);\n    /* By trying to open the file first, we can recover if we can't open it. 
*/\n    return OPE_CANNOT_OPEN;\n  }\n  ret = ope_encoder_continue_new_callbacks(enc, obj, comments);\n  if (ret == OPE_OK) return ret;\n  fclose(obj->file);\n  free(obj);\n  return ret;\n}\n\n/* Ends the stream and create a new file (callback-based). */\nint ope_encoder_continue_new_callbacks(OggOpusEnc *enc, void *user_data, OggOpusComments *comments) {\n  EncStream *new_stream;\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  assert(enc->streams);\n  assert(enc->last_stream);\n  new_stream = stream_create(comments);\n  if (!new_stream) return OPE_ALLOC_FAIL;\n  new_stream->user_data = user_data;\n  new_stream->end_granule = enc->write_granule;\n  enc->last_stream->next = new_stream;\n  enc->last_stream = new_stream;\n  return OPE_OK;\n}\n\nint ope_encoder_flush_header(OggOpusEnc *enc) {\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  if (enc->last_stream->header_is_frozen) return OPE_TOO_LATE;\n  if (enc->last_stream->stream_is_init) return OPE_TOO_LATE;\n  else init_stream(enc);\n  return OPE_OK;\n}\n\n/* Goes straight to the libopus ctl() functions. */\nint ope_encoder_ctl(OggOpusEnc *enc, int request, ...) 
{\n  int ret;\n  int translate;\n  va_list ap;\n  if (enc->unrecoverable) return OPE_UNRECOVERABLE;\n  va_start(ap, request);\n  ret = OPE_OK;\n  switch (request) {\n    case OPUS_SET_APPLICATION_REQUEST:\n    case OPUS_SET_BITRATE_REQUEST:\n    case OPUS_SET_MAX_BANDWIDTH_REQUEST:\n    case OPUS_SET_VBR_REQUEST:\n    case OPUS_SET_BANDWIDTH_REQUEST:\n    case OPUS_SET_COMPLEXITY_REQUEST:\n    case OPUS_SET_INBAND_FEC_REQUEST:\n    case OPUS_SET_PACKET_LOSS_PERC_REQUEST:\n    case OPUS_SET_DTX_REQUEST:\n    case OPUS_SET_VBR_CONSTRAINT_REQUEST:\n    case OPUS_SET_FORCE_CHANNELS_REQUEST:\n    case OPUS_SET_SIGNAL_REQUEST:\n    case OPUS_SET_LSB_DEPTH_REQUEST:\n    case OPUS_SET_PREDICTION_DISABLED_REQUEST:\n#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST\n    case OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST:\n#endif\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      ret = opus_multistream_encoder_ctl(enc->st, request, value);\n    }\n    break;\n    case OPUS_GET_LOOKAHEAD_REQUEST:\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      ret = opus_multistream_encoder_ctl(enc->st, request, value);\n    }\n    break;\n    case OPUS_SET_EXPERT_FRAME_DURATION_REQUEST:\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      int max_supported = OPUS_FRAMESIZE_60_MS;\n#ifdef OPUS_FRAMESIZE_120_MS\n      max_supported = OPUS_FRAMESIZE_120_MS;\n#endif\n      if (value < OPUS_FRAMESIZE_2_5_MS || value > max_supported) {\n        ret = OPUS_UNIMPLEMENTED;\n        break;\n      }\n      ret = opus_multistream_encoder_ctl(enc->st, request, value);\n      if (ret == OPUS_OK) {\n        if (value <= OPUS_FRAMESIZE_40_MS)\n          enc->frame_size = 120<<(value-OPUS_FRAMESIZE_2_5_MS);\n        else\n          enc->frame_size = (value-OPUS_FRAMESIZE_2_5_MS-2)*960;\n      }\n    }\n    break;\n    case OPUS_GET_APPLICATION_REQUEST:\n    case OPUS_GET_BITRATE_REQUEST:\n    case OPUS_GET_MAX_BANDWIDTH_REQUEST:\n    case OPUS_GET_VBR_REQUEST:\n    case 
OPUS_GET_BANDWIDTH_REQUEST:\n    case OPUS_GET_COMPLEXITY_REQUEST:\n    case OPUS_GET_INBAND_FEC_REQUEST:\n    case OPUS_GET_PACKET_LOSS_PERC_REQUEST:\n    case OPUS_GET_DTX_REQUEST:\n    case OPUS_GET_VBR_CONSTRAINT_REQUEST:\n    case OPUS_GET_FORCE_CHANNELS_REQUEST:\n    case OPUS_GET_SIGNAL_REQUEST:\n    case OPUS_GET_LSB_DEPTH_REQUEST:\n    case OPUS_GET_PREDICTION_DISABLED_REQUEST:\n#ifdef OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST\n    case OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST:\n#endif\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      ret = opus_multistream_encoder_ctl(enc->st, request, value);\n    }\n    break;\n    case OPUS_MULTISTREAM_GET_ENCODER_STATE_REQUEST:\n    {\n      opus_int32 stream_id;\n      OpusEncoder **value;\n      stream_id = va_arg(ap, opus_int32);\n      value = va_arg(ap, OpusEncoder**);\n      ret = opus_multistream_encoder_ctl(enc->st, request, stream_id, value);\n    }\n    break;\n\n    /* ****************** libopusenc-specific requests. 
********************** */\n    case OPE_SET_DECISION_DELAY_REQUEST:\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      if (value < 0) {\n        ret = OPE_BAD_ARG;\n        break;\n      }\n      value = MIN(value, MAX_LOOKAHEAD);\n      enc->decision_delay = value;\n    }\n    break;\n    case OPE_GET_DECISION_DELAY_REQUEST:\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      *value = enc->decision_delay;\n    }\n    break;\n    case OPE_SET_MUXING_DELAY_REQUEST:\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      if (value < 0) {\n        ret = OPE_BAD_ARG;\n        break;\n      }\n      enc->max_ogg_delay = value;\n      if (enc->oggp) oggp_set_muxing_delay(enc->oggp, enc->max_ogg_delay);\n    }\n    break;\n    case OPE_GET_MUXING_DELAY_REQUEST:\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      *value = enc->max_ogg_delay;\n    }\n    break;\n    case OPE_SET_COMMENT_PADDING_REQUEST:\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      if (value < 0) {\n        ret = OPE_BAD_ARG;\n        break;\n      }\n      enc->comment_padding = value;\n      ret = OPE_OK;\n    }\n    break;\n    case OPE_GET_COMMENT_PADDING_REQUEST:\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      *value = enc->comment_padding;\n    }\n    break;\n    case OPE_SET_SERIALNO_REQUEST:\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      if (!enc->last_stream || enc->last_stream->header_is_frozen) {\n        ret = OPE_TOO_LATE;\n        break;\n      }\n      enc->last_stream->serialno = value;\n      enc->last_stream->serialno_is_set = 1;\n      ret = OPE_OK;\n    }\n    break;\n    case OPE_GET_SERIALNO_REQUEST:\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      *value = enc->last_stream->serialno;\n    }\n    break;\n    case OPE_SET_PACKET_CALLBACK_REQUEST:\n    {\n      ope_packet_func value = va_arg(ap, ope_packet_func);\n      void *data = va_arg(ap, void *);\n      
enc->packet_callback = value;\n      enc->packet_callback_data = data;\n      ret = OPE_OK;\n    }\n    break;\n    case OPE_SET_HEADER_GAIN_REQUEST:\n    {\n      opus_int32 value = va_arg(ap, opus_int32);\n      if (!enc->last_stream || enc->last_stream->header_is_frozen) {\n        ret = OPE_TOO_LATE;\n        break;\n      }\n      enc->header.gain = value;\n      ret = OPE_OK;\n    }\n    break;\n    case OPE_GET_HEADER_GAIN_REQUEST:\n    {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      *value = enc->header.gain;\n    }\n    break;\n    default:\n      ret = OPE_UNIMPLEMENTED;\n  }\n  va_end(ap);\n  translate = ret != 0 && (request < 14000 || (ret < 0 && ret >= -10));\n  if (translate) {\n    if (ret == OPUS_BAD_ARG) ret = OPE_BAD_ARG;\n    else if (ret == OPUS_INTERNAL_ERROR) ret = OPE_INTERNAL_ERROR;\n    else if (ret == OPUS_UNIMPLEMENTED) ret = OPE_UNIMPLEMENTED;\n    else if (ret == OPUS_ALLOC_FAIL) ret = OPE_ALLOC_FAIL;\n    else ret = OPE_INTERNAL_ERROR;\n  }\n  assert(ret == 0 || ret < -10);\n  return ret;\n}\n\nconst char *ope_strerror(int error) {\n  static const char * const ope_error_strings[5] = {\n    \"cannot open file\",\n    \"call cannot be made at this point\",\n    \"unrecoverable error\",\n    \"invalid picture file\",\n    \"invalid icon file (pictures of type 1 MUST be 32x32 PNGs)\"\n  };\n  if (error == 0) return \"success\";\n  else if (error >= -10) return \"unknown error\";\n  else if (error > -30) return opus_strerror(error+10);\n  else if (error >= OPE_INVALID_ICON) return ope_error_strings[-error-30];\n  else return \"unknown error\";\n}\n\nconst char *ope_get_version_string(void)\n{\n  return \"libopusenc \" PACKAGE_VERSION;\n}\n\nint ope_get_abi_version(void) {\n  return OPE_ABI_VERSION;\n}\n\nstatic void vorbis_lpc_from_data(float *data, float *lpci, int n, int stride);\n\nstatic void extend_signal(float *x, int before, int after, int channels) {\n  int c;\n  int i;\n  float window[LPC_PADDING];\n  if (after==0) 
return;\n  before = MIN(before, LPC_INPUT);\n  if (before < 4*LPC_ORDER) {\n    int i;\n    for (i=0;i<after*channels;i++) x[i] = 0;\n    return;\n  }\n  {\n    /* Generate Window using a resonating IIR aka Goertzel's algorithm. */\n    float m0=1, m1=.5*LPC_GOERTZEL_CONST;\n    float a1 = LPC_GOERTZEL_CONST;\n    window[0] = 1;\n    for (i=1;i<LPC_PADDING;i++) {\n      window[i] = a1*m0 - m1;\n      m1 = m0;\n      m0 = window[i];\n    }\n    for (i=0;i<LPC_PADDING;i++) window[i] = .5+.5*window[i];\n  }\n  for (c=0;c<channels;c++) {\n    float lpc[LPC_ORDER];\n    vorbis_lpc_from_data(x-channels*before+c, lpc, before, channels);\n    for (i=0;i<after;i++) {\n      float sum;\n      int j;\n      sum = 0;\n      for (j=0;j<LPC_ORDER;j++) sum -= x[(i-j-1)*channels + c]*lpc[j];\n      x[i*channels + c] = sum;\n    }\n    for (i=0;i<after;i++) x[i*channels + c] *= window[i];\n  }\n}\n\n/* Some of these routines (autocorrelator, LPC coefficient estimator)\n   are derived from code written by Jutta Degener and Carsten Bormann;\n   thus we include their copyright below.  The entirety of this file\n   is freely redistributable on the condition that both of these\n   copyright notices are preserved without modification.  */\n\n/* Preserved Copyright: *********************************************/\n\n/* Copyright 1992, 1993, 1994 by Jutta Degener and Carsten Bormann,\nTechnische Universita\"t Berlin\n\nAny use of this software is permitted provided that this notice is not\nremoved and that neither the authors nor the Technische Universita\"t\nBerlin are deemed to have made any representations as to the\nsuitability of this software for any purpose nor are held responsible\nfor any defects of this software. 
THERE IS ABSOLUTELY NO WARRANTY FOR\nTHIS SOFTWARE.\n\nAs a matter of courtesy, the authors request to be informed about uses\nthis software has found, about bugs in this software, and about any\nimprovements that may be of general interest.\n\nBerlin, 28.11.1994\nJutta Degener\nCarsten Bormann\n\n*********************************************************************/\n\nstatic void vorbis_lpc_from_data(float *data, float *lpci, int n, int stride) {\n  double aut[LPC_ORDER+1];\n  double lpc[LPC_ORDER];\n  double error;\n  double epsilon;\n  int i,j;\n\n  /* FIXME: Apply a window to the input. */\n  /* autocorrelation, p+1 lag coefficients */\n  j=LPC_ORDER+1;\n  while(j--){\n    double d=0; /* double needed for accumulator depth */\n    for(i=j;i<n;i++)d+=(double)data[i*stride]*data[(i-j)*stride];\n    aut[j]=d;\n  }\n\n  /* Apply lag windowing (better than bandwidth expansion) */\n  if (LPC_ORDER <= 64) {\n    for (i=1;i<=LPC_ORDER;i++) {\n      /* Approximate this gaussian for low enough order. */\n      /* aut[i] *= exp(-.5*(2*M_PI*.002*i)*(2*M_PI*.002*i));*/\n      aut[i] -= aut[i]*(0.008f*0.008f)*i*i;\n    }\n  }\n  /* Generate lpc coefficients from autocorr values */\n\n  /* set our noise floor to about -100dB */\n  error=aut[0] * (1. + 1e-7);\n  epsilon=1e-6*aut[0]+1e-7;\n\n  for(i=0;i<LPC_ORDER;i++){\n    double r= -aut[i+1];\n\n    if(error<epsilon){\n      memset(lpc+i,0,(LPC_ORDER-i)*sizeof(*lpc));\n      goto done;\n    }\n\n    /* Sum up this iteration's reflection coefficient; note that in\n       Vorbis we don't save it.  If anyone wants to recycle this code\n       and needs reflection coefficients, save the results of 'r' from\n       each iteration. 
*/\n\n    for(j=0;j<i;j++)r-=lpc[j]*aut[i-j];\n    r/=error;\n\n    /* Update LPC coefficients and total error */\n\n    lpc[i]=r;\n    for(j=0;j<i/2;j++){\n      double tmp=lpc[j];\n\n      lpc[j]+=r*lpc[i-1-j];\n      lpc[i-1-j]+=r*tmp;\n    }\n    if(i&1)lpc[j]+=lpc[j]*r;\n\n    error*=1.-r*r;\n\n  }\n\n done:\n\n  /* slightly damp the filter */\n  {\n    double g = .999;\n    double damp = g;\n    for(j=0;j<LPC_ORDER;j++){\n      lpc[j]*=damp;\n      damp*=g;\n    }\n  }\n\n  for(j=0;j<LPC_ORDER;j++)lpci[j]=(float)lpc[j];\n}\n\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/picture.c",
    "content": "/* Copyright (C)2007-2013 Xiph.Org Foundation\n   File: picture.c\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n# include \"config.h\"\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"picture.h\"\n\nstatic const char BASE64_TABLE[64]={\n  'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P',\n  'Q','R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f',\n  'g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v',\n  'w','x','y','z','0','1','2','3','4','5','6','7','8','9','+','/'\n};\n\n/*Utility function for base64 encoding METADATA_BLOCK_PICTURE tags.\n  Stores BASE64_LENGTH(len)+1 bytes in dst (including a terminating 
NUL).*/\nvoid base64_encode(char *dst, const char *src, int len){\n  unsigned s0;\n  unsigned s1;\n  unsigned s2;\n  int      ngroups;\n  int      i;\n  ngroups=len/3;\n  for(i=0;i<ngroups;i++){\n    s0=(unsigned char)src[3*i+0];\n    s1=(unsigned char)src[3*i+1];\n    s2=(unsigned char)src[3*i+2];\n    dst[4*i+0]=BASE64_TABLE[s0>>2];\n    dst[4*i+1]=BASE64_TABLE[(s0&3)<<4|s1>>4];\n    dst[4*i+2]=BASE64_TABLE[(s1&15)<<2|s2>>6];\n    dst[4*i+3]=BASE64_TABLE[s2&63];\n  }\n  len-=3*i;\n  if(len==1){\n    s0=(unsigned char)src[3*i+0];\n    dst[4*i+0]=BASE64_TABLE[s0>>2];\n    dst[4*i+1]=BASE64_TABLE[(s0&3)<<4];\n    dst[4*i+2]='=';\n    dst[4*i+3]='=';\n    i++;\n  }\n  else if(len==2){\n    s0=(unsigned char)src[3*i+0];\n    s1=(unsigned char)src[3*i+1];\n    dst[4*i+0]=BASE64_TABLE[s0>>2];\n    dst[4*i+1]=BASE64_TABLE[(s0&3)<<4|s1>>4];\n    dst[4*i+2]=BASE64_TABLE[(s1&15)<<2];\n    dst[4*i+3]='=';\n    i++;\n  }\n  dst[4*i]='\\0';\n}\n\n/*A version of strncasecmp() that is guaranteed to only ignore the case of\n   ASCII characters.*/\nint oi_strncasecmp(const char *a, const char *b, int n){\n  int i;\n  for(i=0;i<n;i++){\n    int aval;\n    int bval;\n    int diff;\n    aval=a[i];\n    bval=b[i];\n    if(aval>='a'&&aval<='z') {\n      aval-='a'-'A';\n    }\n    if(bval>='a'&&bval<='z'){\n      bval-='a'-'A';\n    }\n    diff=aval-bval;\n    if(diff){\n      return diff;\n    }\n  }\n  return 0;\n}\n\nint is_jpeg(const unsigned char *buf, size_t length){\n  return length>=11&&memcmp(buf,\"\\xFF\\xD8\\xFF\\xE0\",4)==0\n   &&(buf[4]<<8|buf[5])>=16&&memcmp(buf+6,\"JFIF\",5)==0;\n}\n\nint is_png(const unsigned char *buf, size_t length){\n  return length>=8&&memcmp(buf,\"\\x89PNG\\x0D\\x0A\\x1A\\x0A\",8)==0;\n}\n\nint is_gif(const unsigned char *buf, size_t length){\n  return length>=6\n   &&(memcmp(buf,\"GIF87a\",6)==0||memcmp(buf,\"GIF89a\",6)==0);\n}\n\n#define READ_U32_BE(buf) \\\n    (((buf)[0]<<24)|((buf)[1]<<16)|((buf)[2]<<8)|((buf)[3]&0xff))\n\n/*Tries to extract 
the width, height, bits per pixel, and palette size of a\n   PNG.\n  On failure, simply leaves its outputs unmodified.*/\nvoid extract_png_params(const unsigned char *data, size_t data_length,\n                        opus_uint32 *width, opus_uint32 *height,\n                        opus_uint32 *depth, opus_uint32 *colors,\n                        int *has_palette){\n  if(is_png(data,data_length)){\n    size_t offs;\n    offs=8;\n    while(data_length-offs>=12){\n      opus_uint32 chunk_len;\n      chunk_len=READ_U32_BE(data+offs);\n      if(chunk_len>data_length-(offs+12))break;\n      else if(chunk_len==13&&memcmp(data+offs+4,\"IHDR\",4)==0){\n        int color_type;\n        *width=READ_U32_BE(data+offs+8);\n        *height=READ_U32_BE(data+offs+12);\n        color_type=data[offs+17];\n        if(color_type==3){\n          *depth=24;\n          *has_palette=1;\n        }\n        else{\n          int sample_depth;\n          sample_depth=data[offs+16];\n          if(color_type==0)*depth=sample_depth;\n          else if(color_type==2)*depth=sample_depth*3;\n          else if(color_type==4)*depth=sample_depth*2;\n          else if(color_type==6)*depth=sample_depth*4;\n          *colors=0;\n          *has_palette=0;\n          break;\n        }\n      }\n      else if(*has_palette>0&&memcmp(data+offs+4,\"PLTE\",4)==0){\n        *colors=chunk_len/3;\n        break;\n      }\n      offs+=12+chunk_len;\n    }\n  }\n}\n\n/*Tries to extract the width, height, bits per pixel, and palette size of a\n   GIF.\n  On failure, simply leaves its outputs unmodified.*/\nvoid extract_gif_params(const unsigned char *data, size_t data_length,\n                        opus_uint32 *width, opus_uint32 *height,\n                        opus_uint32 *depth, opus_uint32 *colors,\n                        int *has_palette){\n  if(is_gif(data,data_length)&&data_length>=14){\n    *width=data[6]|data[7]<<8;\n    *height=data[8]|data[9]<<8;\n    /*libFLAC hard-codes the depth to 24.*/\n    
*depth=24;\n    *colors=1<<((data[10]&7)+1);\n    *has_palette=1;\n  }\n}\n\n\n/*Tries to extract the width, height, bits per pixel, and palette size of a\n   JPEG.\n  On failure, simply leaves its outputs unmodified.*/\nvoid extract_jpeg_params(const unsigned char *data, size_t data_length,\n                         opus_uint32 *width, opus_uint32 *height,\n                         opus_uint32 *depth, opus_uint32 *colors,\n                         int *has_palette){\n  if(is_jpeg(data,data_length)){\n    size_t offs;\n    offs=2;\n    for(;;){\n      size_t segment_len;\n      int    marker;\n      while(offs<data_length&&data[offs]!=0xFF)offs++;\n      while(offs<data_length&&data[offs]==0xFF)offs++;\n      marker=data[offs];\n      offs++;\n      /*If we hit EOI* (end of image), or another SOI* (start of image),\n         or SOS (start of scan), then stop now.*/\n      if(offs>=data_length||(marker>=0xD8&&marker<=0xDA))break;\n      /*RST* (restart markers): skip (no segment length).*/\n      else if(marker>=0xD0&&marker<=0xD7)continue;\n      /*Read the length of the marker segment.*/\n      if(data_length-offs<2)break;\n      segment_len=data[offs]<<8|data[offs+1];\n      if(segment_len<2||data_length-offs<segment_len)break;\n      if(marker==0xC0||(marker>0xC0&&marker<0xD0&&(marker&3)!=0)){\n        /*Found a SOFn (start of frame) marker segment:*/\n        if(segment_len>=8){\n          *height=data[offs+3]<<8|data[offs+4];\n          *width=data[offs+5]<<8|data[offs+6];\n          *depth=data[offs+2]*data[offs+7];\n          *colors=0;\n          *has_palette=0;\n        }\n        break;\n      }\n      /*Other markers: skip the whole marker segment.*/\n      offs+=segment_len;\n    }\n  }\n}\n\n#define IMAX(a,b) ((a) > (b) ? 
(a) : (b))\n\n/*Builds a METADATA_BLOCK_PICTURE tag from an image file.\n  filename: The image file to read.\n  picture_type: The picture type; a negative value defaults to 3 (front cover).\n  description: An optional description string (NULL is treated as \"\").\n  error: Returns OPE_OK on success or an OPE_* error code on failure.\n  seen_file_icons: Bit flags used to track if any pictures of type 1 or type 2\n   have already been added, to ensure only one is allowed.\n  Return: A Base64-encoded string suitable for use in a METADATA_BLOCK_PICTURE\n   tag.*/\nchar *parse_picture_specification(const char *filename, int picture_type, const char *description,\n                                  int *error, int *seen_file_icons){\n  FILE          *picture_file;\n  opus_uint32  width;\n  opus_uint32  height;\n  opus_uint32  depth;\n  opus_uint32  colors;\n  unsigned char *buf;\n  const char    *mime_type;\n  char          *out;\n  size_t         cbuf;\n  size_t         nbuf;\n  size_t         data_offset;\n  size_t         data_length;\n  size_t         b64_length;\n  *error = OPE_OK;\n  if (picture_type < 0) picture_type=3;\n  if (description == NULL) description = \"\";\n  picture_file=fopen(filename,\"rb\");\n  /*Buffer size: 8 static 4-byte fields plus 2 dynamic fields, plus the\n     file data.\n    We reserve at least 10 bytes for the media type, in case we still need to\n     extract it from the file.*/\n  data_offset=32+strlen(description)+10;\n  buf=NULL;\n  {\n    int          has_palette;\n    /*Read the file in, attempt to parse the media type and image dimensions,\n      and validate what the user passed in.*/\n    if(picture_file==NULL){\n      *error = OPE_CANNOT_OPEN;\n      return NULL;\n    }\n    nbuf=data_offset;\n    /*Add a reasonable starting image file size.*/\n    cbuf=data_offset+65536;\n    for(;;){\n      unsigned char *new_buf;\n      
size_t         nread;\n      new_buf=realloc(buf,cbuf);\n      if(new_buf==NULL){\n        fclose(picture_file);\n        free(buf);\n        *error = OPE_ALLOC_FAIL;\n        return NULL;\n      }\n      buf=new_buf;\n      nread=fread(buf+nbuf,1,cbuf-nbuf,picture_file);\n      nbuf+=nread;\n      if(nbuf<cbuf){\n        int file_error;\n        file_error=ferror(picture_file);\n        fclose(picture_file);\n        if(file_error){\n          free(buf);\n          *error = OPE_INVALID_PICTURE;\n          return NULL;\n        }\n        break;\n      }\n      if(cbuf==0xFFFFFFFF){\n        fclose(picture_file);\n        free(buf);\n        *error = OPE_INVALID_PICTURE;\n        return NULL;\n      }\n      else if(cbuf>0x7FFFFFFFU)cbuf=0xFFFFFFFFU;\n      else cbuf=cbuf<<1|1;\n    }\n    data_length=nbuf-data_offset;\n    /*Try to extract the image dimensions/color information from the file.*/\n    width=height=depth=colors=0;\n    has_palette=-1;\n    {\n      if(is_jpeg(buf+data_offset,data_length)){\n        mime_type=\"image/jpeg\";\n        extract_jpeg_params(buf+data_offset,data_length,\n         &width,&height,&depth,&colors,&has_palette);\n      }\n      else if(is_png(buf+data_offset,data_length)){\n        mime_type=\"image/png\";\n        extract_png_params(buf+data_offset,data_length,\n         &width,&height,&depth,&colors,&has_palette);\n      }\n      else if(is_gif(buf+data_offset,data_length)){\n        mime_type=\"image/gif\";\n        extract_gif_params(buf+data_offset,data_length,\n         &width,&height,&depth,&colors,&has_palette);\n      }\n      else{\n        free(buf);\n        *error = OPE_INVALID_PICTURE;\n        return NULL;\n      }\n    }\n  }\n  /*These fields MUST be set correctly OR all set to zero.\n    So if any of them (except colors, for which 0 is a valid value) are still\n     zero, clear the rest to zero.*/\n  if(width==0||height==0||depth==0)width=height=depth=colors=0;\n  if(picture_type==1&&(width!=32||height!=32\n   
||strlen(mime_type)!=9\n   ||oi_strncasecmp(\"image/png\",mime_type,9)!=0)){\n    free(buf);\n    *error = OPE_INVALID_ICON;\n    return NULL;\n  }\n  /*Build the METADATA_BLOCK_PICTURE buffer.\n    We do this backwards from data_offset, because we didn't necessarily know\n     how big the media type string was before we read the data in.*/\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,(unsigned long)data_length);\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,colors);\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,depth);\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,height);\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,width);\n  data_offset-=strlen(description);\n  memcpy(buf+data_offset,description,strlen(description));\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,strlen(description));\n  data_offset-=strlen(mime_type);\n  memcpy(buf+data_offset,mime_type,strlen(mime_type));\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,strlen(mime_type));\n  data_offset-=4;\n  WRITE_U32_BE(buf+data_offset,picture_type);\n  data_length=nbuf-data_offset;\n  b64_length=BASE64_LENGTH(data_length);\n  out=(char *)malloc(b64_length+1);\n  if(out!=NULL){\n    base64_encode(out,(char *)buf+data_offset,data_length);\n    if(picture_type>=1&&picture_type<=2)*seen_file_icons|=picture_type;\n  } else {\n    *error = OPE_ALLOC_FAIL;\n  }\n  free(buf);\n  return out;\n}\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/picture.h",
    "content": "/* Copyright (C)2007-2013 Xiph.Org Foundation\n   File: picture.h\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef __PICTURE_H\n#define __PICTURE_H\n\n#include <opus.h>\n#include \"opusenc.h\"\n\ntypedef enum{\n  PIC_FORMAT_JPEG,\n  PIC_FORMAT_PNG,\n  PIC_FORMAT_GIF\n}picture_format;\n\n#define BASE64_LENGTH(len) (((len)+2)/3*4)\n\n/*Utility function for base64 encoding METADATA_BLOCK_PICTURE tags.\n  Stores BASE64_LENGTH(len)+1 bytes in dst (including a terminating NUL).*/\nvoid base64_encode(char *dst, const char *src, int len);\n\nint oi_strncasecmp(const char *a, const char *b, int n);\n\nint is_jpeg(const unsigned char *buf, size_t length);\nint is_png(const unsigned char *buf, size_t 
length);\nint is_gif(const unsigned char *buf, size_t length);\n\nvoid extract_png_params(const unsigned char *data, size_t data_length,\n                        opus_uint32 *width, opus_uint32 *height,\n                        opus_uint32 *depth, opus_uint32 *colors,\n                        int *has_palette);\nvoid extract_gif_params(const unsigned char *data, size_t data_length,\n                        opus_uint32 *width, opus_uint32 *height,\n                        opus_uint32 *depth, opus_uint32 *colors,\n                        int *has_palette);\nvoid extract_jpeg_params(const unsigned char *data, size_t data_length,\n                         opus_uint32 *width, opus_uint32 *height,\n                         opus_uint32 *depth, opus_uint32 *colors,\n                         int *has_palette);\n\nchar *parse_picture_specification(const char *filename, int picture_type, const char *description,\n                                  int *error, int *seen_file_icons);\n\n/*No trailing semicolon after while(0), so the macro expands safely inside\n   if/else statements.*/\n#define WRITE_U32_BE(buf, val) \\\n  do{ \\\n    (buf)[0]=(unsigned char)((val)>>24); \\\n    (buf)[1]=(unsigned char)((val)>>16); \\\n    (buf)[2]=(unsigned char)((val)>>8); \\\n    (buf)[3]=(unsigned char)(val); \\\n  } \\\n  while(0)\n\n#endif /* __PICTURE_H */\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/resample.c",
    "content": "/* Copyright (C) 2007-2008 Jean-Marc Valin\n   Copyright (C) 2008      Thorvald Natvig\n\n   File: resample.c\n   Arbitrary resampling code\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions are\n   met:\n\n   1. Redistributions of source code must retain the above copyright notice,\n   this list of conditions and the following disclaimer.\n\n   2. Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   3. The name of the author may not be used to endorse or promote products\n   derived from this software without specific prior written permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR\n   IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n   OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n   DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,\n   INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,\n   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n   ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n   POSSIBILITY OF SUCH DAMAGE.\n*/\n\n/*\n   The design goals of this code are:\n      - Very fast algorithm\n      - SIMD-friendly algorithm\n      - Low memory requirement\n      - Good *perceptual* quality (and not best SNR)\n\n   Warning: This resampler is relatively new. Although I think I got rid of\n   all the major bugs and I don't expect the API to change anymore, there\n   may be something I've missed. 
So use with caution.\n\n   This algorithm is based on this original resampling algorithm:\n   Smith, Julius O. Digital Audio Resampling Home Page\n   Center for Computer Research in Music and Acoustics (CCRMA),\n   Stanford University, 2007.\n   Web published at http://www-ccrma.stanford.edu/~jos/resample/.\n\n   There is one main difference, though. This resampler uses cubic\n   interpolation instead of linear interpolation in the above paper. This\n   makes the table much smaller and makes it possible to compute that table\n   on a per-stream basis. In turn, being able to tweak the table for each\n   stream makes it possible to both reduce complexity on simple ratios\n   (e.g. 2/3), and get rid of the rounding operations in the inner loop.\n   The latter both reduces CPU time and makes the algorithm more SIMD-friendly.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include <stdlib.h>\nstatic void *speex_alloc(int size) { return calloc(size, 1); }\nstatic void *speex_realloc(void *ptr, int size) { return realloc(ptr, size); }\nstatic void speex_free(void *ptr) { free(ptr); }\n#define speex_assert(x)\n#define EXPORT\n#include \"speex_resampler.h\"\n#include \"arch.h\"\n\n#include \"stack_alloc.h\"\n#include <math.h>\n#include <limits.h>\n\n#ifndef M_PI\n#define M_PI 3.14159265358979323846\n#endif\n\n#define IMAX(a,b) ((a) > (b) ? (a) : (b))\n#define IMIN(a,b) ((a) < (b) ? 
(a) : (b))\n\n#ifndef NULL\n#define NULL 0\n#endif\n\n#ifndef UINT32_MAX\n#define UINT32_MAX 4294967295U\n#endif\n\n#if defined(__SSE__) && !defined(FIXED_POINT)\n#include \"resample_sse.h\"\n#endif\n\n#ifdef _USE_NEON\n#include \"resample_neon.h\"\n#endif\n\n/* Number of elements to allocate on the stack */\n#ifdef VAR_ARRAYS\n#define FIXED_STACK_ALLOC 8192\n#else\n#define FIXED_STACK_ALLOC 1024\n#endif\n\ntypedef int (*resampler_basic_func)(SpeexResamplerState *, spx_uint32_t , const spx_word16_t *, spx_uint32_t *, spx_word16_t *, spx_uint32_t *);\n\nstruct SpeexResamplerState_ {\n   spx_uint32_t in_rate;\n   spx_uint32_t out_rate;\n   spx_uint32_t num_rate;\n   spx_uint32_t den_rate;\n\n   int    quality;\n   spx_uint32_t nb_channels;\n   spx_uint32_t filt_len;\n   spx_uint32_t mem_alloc_size;\n   spx_uint32_t buffer_size;\n   int          int_advance;\n   int          frac_advance;\n   float  cutoff;\n   spx_uint32_t oversample;\n   int          initialised;\n   int          started;\n\n   /* These are per-channel */\n   spx_int32_t  *last_sample;\n   spx_uint32_t *samp_frac_num;\n   spx_uint32_t *magic_samples;\n\n   spx_word16_t *mem;\n   spx_word16_t *sinc_table;\n   spx_uint32_t sinc_table_length;\n   resampler_basic_func resampler_ptr;\n\n   int    in_stride;\n   int    out_stride;\n} ;\n\nstatic const double kaiser12_table[68] = {\n   0.99859849, 1.00000000, 0.99859849, 0.99440475, 0.98745105, 0.97779076,\n   0.96549770, 0.95066529, 0.93340547, 0.91384741, 0.89213598, 0.86843014,\n   0.84290116, 0.81573067, 0.78710866, 0.75723148, 0.72629970, 0.69451601,\n   0.66208321, 0.62920216, 0.59606986, 0.56287762, 0.52980938, 0.49704014,\n   0.46473455, 0.43304576, 0.40211431, 0.37206735, 0.34301800, 0.31506490,\n   0.28829195, 0.26276832, 0.23854851, 0.21567274, 0.19416736, 0.17404546,\n   0.15530766, 0.13794294, 0.12192957, 0.10723616, 0.09382272, 0.08164178,\n   0.07063950, 0.06075685, 0.05193064, 0.04409466, 0.03718069, 0.03111947,\n   0.02584161, 0.02127838, 
0.01736250, 0.01402878, 0.01121463, 0.00886058,\n   0.00691064, 0.00531256, 0.00401805, 0.00298291, 0.00216702, 0.00153438,\n   0.00105297, 0.00069463, 0.00043489, 0.00025272, 0.00013031, 0.0000527734,\n   0.00001000, 0.00000000};\n/*\nstatic const double kaiser12_table[36] = {\n   0.99440475, 1.00000000, 0.99440475, 0.97779076, 0.95066529, 0.91384741,\n   0.86843014, 0.81573067, 0.75723148, 0.69451601, 0.62920216, 0.56287762,\n   0.49704014, 0.43304576, 0.37206735, 0.31506490, 0.26276832, 0.21567274,\n   0.17404546, 0.13794294, 0.10723616, 0.08164178, 0.06075685, 0.04409466,\n   0.03111947, 0.02127838, 0.01402878, 0.00886058, 0.00531256, 0.00298291,\n   0.00153438, 0.00069463, 0.00025272, 0.0000527734, 0.00000500, 0.00000000};\n*/\nstatic const double kaiser10_table[36] = {\n   0.99537781, 1.00000000, 0.99537781, 0.98162644, 0.95908712, 0.92831446,\n   0.89005583, 0.84522401, 0.79486424, 0.74011713, 0.68217934, 0.62226347,\n   0.56155915, 0.50119680, 0.44221549, 0.38553619, 0.33194107, 0.28205962,\n   0.23636152, 0.19515633, 0.15859932, 0.12670280, 0.09935205, 0.07632451,\n   0.05731132, 0.04193980, 0.02979584, 0.02044510, 0.01345224, 0.00839739,\n   0.00488951, 0.00257636, 0.00115101, 0.00035515, 0.00000000, 0.00000000};\n\nstatic const double kaiser8_table[36] = {\n   0.99635258, 1.00000000, 0.99635258, 0.98548012, 0.96759014, 0.94302200,\n   0.91223751, 0.87580811, 0.83439927, 0.78875245, 0.73966538, 0.68797126,\n   0.63451750, 0.58014482, 0.52566725, 0.47185369, 0.41941150, 0.36897272,\n   0.32108304, 0.27619388, 0.23465776, 0.19672670, 0.16255380, 0.13219758,\n   0.10562887, 0.08273982, 0.06335451, 0.04724088, 0.03412321, 0.02369490,\n   0.01563093, 0.00959968, 0.00527363, 0.00233883, 0.00050000, 0.00000000};\n\nstatic const double kaiser6_table[36] = {\n   0.99733006, 1.00000000, 0.99733006, 0.98935595, 0.97618418, 0.95799003,\n   0.93501423, 0.90755855, 0.87598009, 0.84068475, 0.80211977, 0.76076565,\n   0.71712752, 0.67172623, 0.62508937, 0.57774224, 
0.53019925, 0.48295561,\n   0.43647969, 0.39120616, 0.34752997, 0.30580127, 0.26632152, 0.22934058,\n   0.19505503, 0.16360756, 0.13508755, 0.10953262, 0.08693120, 0.06722600,\n   0.05031820, 0.03607231, 0.02432151, 0.01487334, 0.00752000, 0.00000000};\n\nstruct FuncDef {\n   const double *table;\n   int oversample;\n};\n\nstatic const struct FuncDef _KAISER12 = {kaiser12_table, 64};\n#define KAISER12 (&_KAISER12)\n/*static struct FuncDef _KAISER12 = {kaiser12_table, 32};\n#define KAISER12 (&_KAISER12)*/\nstatic const struct FuncDef _KAISER10 = {kaiser10_table, 32};\n#define KAISER10 (&_KAISER10)\nstatic const struct FuncDef _KAISER8 = {kaiser8_table, 32};\n#define KAISER8 (&_KAISER8)\nstatic const struct FuncDef _KAISER6 = {kaiser6_table, 32};\n#define KAISER6 (&_KAISER6)\n\nstruct QualityMapping {\n   int base_length;\n   int oversample;\n   float downsample_bandwidth;\n   float upsample_bandwidth;\n   const struct FuncDef *window_func;\n};\n\n\n/* This table maps conversion quality to internal parameters. 
There are two\n   reasons that explain why the up-sampling bandwidth is larger than the\n   down-sampling bandwidth:\n   1) When up-sampling, we can assume that the spectrum is already attenuated\n      close to the Nyquist rate (from an A/D or a previous resampling filter)\n   2) Any aliasing that occurs very close to the Nyquist rate will be masked\n      by the sinusoids/noise just below the Nyquist rate (guaranteed only for\n      up-sampling).\n*/\nstatic const struct QualityMapping quality_map[11] = {\n   {  8,  4, 0.830f, 0.860f, KAISER6 }, /* Q0 */\n   { 16,  4, 0.850f, 0.880f, KAISER6 }, /* Q1 */\n   { 32,  4, 0.882f, 0.910f, KAISER6 }, /* Q2 */  /* 82.3% cutoff ( ~60 dB stop) 6  */\n   { 48,  8, 0.895f, 0.917f, KAISER8 }, /* Q3 */  /* 84.9% cutoff ( ~80 dB stop) 8  */\n   { 64,  8, 0.921f, 0.940f, KAISER8 }, /* Q4 */  /* 88.7% cutoff ( ~80 dB stop) 8  */\n   { 80, 16, 0.922f, 0.940f, KAISER10}, /* Q5 */  /* 89.1% cutoff (~100 dB stop) 10 */\n   { 96, 16, 0.940f, 0.945f, KAISER10}, /* Q6 */  /* 91.5% cutoff (~100 dB stop) 10 */\n   {128, 16, 0.950f, 0.950f, KAISER10}, /* Q7 */  /* 93.1% cutoff (~100 dB stop) 10 */\n   {160, 16, 0.960f, 0.960f, KAISER10}, /* Q8 */  /* 94.5% cutoff (~100 dB stop) 10 */\n   {192, 32, 0.968f, 0.968f, KAISER12}, /* Q9 */  /* 95.5% cutoff (~100 dB stop) 10 */\n   {256, 32, 0.975f, 0.975f, KAISER12}, /* Q10 */ /* 96.6% cutoff (~100 dB stop) 10 */\n};\n/*8,24,40,56,80,104,128,160,200,256,320*/\nstatic double compute_func(float x, const struct FuncDef *func)\n{\n   float y, frac;\n   double interp[4];\n   int ind;\n   y = x*func->oversample;\n   ind = (int)floor(y);\n   frac = (y-ind);\n   /* CSE will handle the repeated powers */\n   interp[3] =  -0.1666666667*frac + 0.1666666667*(frac*frac*frac);\n   interp[2] = frac + 0.5*(frac*frac) - 0.5*(frac*frac*frac);\n   /*interp[2] = 1.f - 0.5f*frac - frac*frac + 0.5f*frac*frac*frac;*/\n   interp[0] = -0.3333333333*frac + 0.5*(frac*frac) - 0.1666666667*(frac*frac*frac);\n   /* Just to 
make sure we don't have rounding problems */\n   interp[1] = 1.f-interp[3]-interp[2]-interp[0];\n\n   /*sum = frac*accum[1] + (1-frac)*accum[2];*/\n   return interp[0]*func->table[ind] + interp[1]*func->table[ind+1] + interp[2]*func->table[ind+2] + interp[3]*func->table[ind+3];\n}\n\n#if 0\n#include <stdio.h>\nint main(int argc, char **argv)\n{\n   int i;\n   for (i=0;i<256;i++)\n   {\n      printf (\"%f\\n\", compute_func(i/256., KAISER12));\n   }\n   return 0;\n}\n#endif\n\n#ifdef FIXED_POINT\n/* The slow way of computing a sinc for the table. Should improve that some day */\nstatic spx_word16_t sinc(float cutoff, float x, int N, const struct FuncDef *window_func)\n{\n   /*fprintf (stderr, \"%f \", x);*/\n   float xx = x * cutoff;\n   if (fabs(x)<1e-6f)\n      return WORD2INT(32768.*cutoff);\n   else if (fabs(x) > .5f*N)\n      return 0;\n   /*FIXME: Can it really be any slower than this? */\n   return WORD2INT(32768.*cutoff*sin(M_PI*xx)/(M_PI*xx) * compute_func(fabs(2.*x/N), window_func));\n}\n#else\n/* The slow way of computing a sinc for the table. Should improve that some day */\nstatic spx_word16_t sinc(float cutoff, float x, int N, const struct FuncDef *window_func)\n{\n   /*fprintf (stderr, \"%f \", x);*/\n   float xx = x * cutoff;\n   if (fabs(x)<1e-6)\n      return cutoff;\n   else if (fabs(x) > .5*N)\n      return 0;\n   /*FIXME: Can it really be any slower than this? */\n   return cutoff*sin(M_PI*xx)/(M_PI*xx) * compute_func(fabs(2.*x/N), window_func);\n}\n#endif\n\n#ifdef FIXED_POINT\nstatic void cubic_coef(spx_word16_t x, spx_word16_t interp[4])\n{\n   /* Compute interpolation coefficients. 
I'm not sure whether this corresponds to cubic interpolation\n   but I know it's MMSE-optimal on a sinc */\n   spx_word16_t x2, x3;\n   x2 = MULT16_16_P15(x, x);\n   x3 = MULT16_16_P15(x, x2);\n   interp[0] = PSHR32(MULT16_16(QCONST16(-0.16667f, 15),x) + MULT16_16(QCONST16(0.16667f, 15),x3),15);\n   interp[1] = EXTRACT16(EXTEND32(x) + SHR32(SUB32(EXTEND32(x2),EXTEND32(x3)),1));\n   interp[3] = PSHR32(MULT16_16(QCONST16(-0.33333f, 15),x) + MULT16_16(QCONST16(.5f,15),x2) - MULT16_16(QCONST16(0.16667f, 15),x3),15);\n   /* Just to make sure we don't have rounding problems */\n   interp[2] = Q15_ONE-interp[0]-interp[1]-interp[3];\n   if (interp[2]<32767)\n      interp[2]+=1;\n}\n#else\nstatic void cubic_coef(spx_word16_t frac, spx_word16_t interp[4])\n{\n   /* Compute interpolation coefficients. I'm not sure whether this corresponds to cubic interpolation\n   but I know it's MMSE-optimal on a sinc */\n   interp[0] =  -0.16667f*frac + 0.16667f*frac*frac*frac;\n   interp[1] = frac + 0.5f*frac*frac - 0.5f*frac*frac*frac;\n   /*interp[2] = 1.f - 0.5f*frac - frac*frac + 0.5f*frac*frac*frac;*/\n   interp[3] = -0.33333f*frac + 0.5f*frac*frac - 0.16667f*frac*frac*frac;\n   /* Just to make sure we don't have rounding problems */\n   interp[2] = 1.-interp[0]-interp[1]-interp[3];\n}\n#endif\n\nstatic int resampler_basic_direct_single(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_word16_t *in, spx_uint32_t *in_len, spx_word16_t *out, spx_uint32_t *out_len)\n{\n   const int N = st->filt_len;\n   int out_sample = 0;\n   int last_sample = st->last_sample[channel_index];\n   spx_uint32_t samp_frac_num = st->samp_frac_num[channel_index];\n   const spx_word16_t *sinc_table = st->sinc_table;\n   const int out_stride = st->out_stride;\n   const int int_advance = st->int_advance;\n   const int frac_advance = st->frac_advance;\n   const spx_uint32_t den_rate = st->den_rate;\n   spx_word32_t sum;\n\n   while (!(last_sample >= (spx_int32_t)*in_len || out_sample >= 
(spx_int32_t)*out_len))\n   {\n      const spx_word16_t *sinct = & sinc_table[samp_frac_num*N];\n      const spx_word16_t *iptr = & in[last_sample];\n\n#ifndef OVERRIDE_INNER_PRODUCT_SINGLE\n      int j;\n      sum = 0;\n      for(j=0;j<N;j++) sum += MULT16_16(sinct[j], iptr[j]);\n\n/*    This code is slower on most DSPs which have only 2 accumulators.\n      Plus this forces truncation to 32 bits and you lose the HW guard bits.\n      I think we can trust the compiler and let it vectorize and/or unroll itself.\n      spx_word32_t accum[4] = {0,0,0,0};\n      for(j=0;j<N;j+=4) {\n        accum[0] += MULT16_16(sinct[j], iptr[j]);\n        accum[1] += MULT16_16(sinct[j+1], iptr[j+1]);\n        accum[2] += MULT16_16(sinct[j+2], iptr[j+2]);\n        accum[3] += MULT16_16(sinct[j+3], iptr[j+3]);\n      }\n      sum = accum[0] + accum[1] + accum[2] + accum[3];\n*/\n      sum = SATURATE32PSHR(sum, 15, 32767);\n#else\n      sum = inner_product_single(sinct, iptr, N);\n#endif\n\n      out[out_stride * out_sample++] = sum;\n      last_sample += int_advance;\n      samp_frac_num += frac_advance;\n      if (samp_frac_num >= den_rate)\n      {\n         samp_frac_num -= den_rate;\n         last_sample++;\n      }\n   }\n\n   st->last_sample[channel_index] = last_sample;\n   st->samp_frac_num[channel_index] = samp_frac_num;\n   return out_sample;\n}\n\n#ifdef FIXED_POINT\n#else\n/* This is the same as the previous function, except with a double-precision accumulator */\nstatic int resampler_basic_direct_double(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_word16_t *in, spx_uint32_t *in_len, spx_word16_t *out, spx_uint32_t *out_len)\n{\n   const int N = st->filt_len;\n   int out_sample = 0;\n   int last_sample = st->last_sample[channel_index];\n   spx_uint32_t samp_frac_num = st->samp_frac_num[channel_index];\n   const spx_word16_t *sinc_table = st->sinc_table;\n   const int out_stride = st->out_stride;\n   const int int_advance = st->int_advance;\n   const 
int frac_advance = st->frac_advance;\n   const spx_uint32_t den_rate = st->den_rate;\n   double sum;\n\n   while (!(last_sample >= (spx_int32_t)*in_len || out_sample >= (spx_int32_t)*out_len))\n   {\n      const spx_word16_t *sinct = & sinc_table[samp_frac_num*N];\n      const spx_word16_t *iptr = & in[last_sample];\n\n#ifndef OVERRIDE_INNER_PRODUCT_DOUBLE\n      int j;\n      double accum[4] = {0,0,0,0};\n\n      for(j=0;j<N;j+=4) {\n        accum[0] += sinct[j]*iptr[j];\n        accum[1] += sinct[j+1]*iptr[j+1];\n        accum[2] += sinct[j+2]*iptr[j+2];\n        accum[3] += sinct[j+3]*iptr[j+3];\n      }\n      sum = accum[0] + accum[1] + accum[2] + accum[3];\n#else\n      sum = inner_product_double(sinct, iptr, N);\n#endif\n\n      out[out_stride * out_sample++] = PSHR32(sum, 15);\n      last_sample += int_advance;\n      samp_frac_num += frac_advance;\n      if (samp_frac_num >= den_rate)\n      {\n         samp_frac_num -= den_rate;\n         last_sample++;\n      }\n   }\n\n   st->last_sample[channel_index] = last_sample;\n   st->samp_frac_num[channel_index] = samp_frac_num;\n   return out_sample;\n}\n#endif\n\nstatic int resampler_basic_interpolate_single(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_word16_t *in, spx_uint32_t *in_len, spx_word16_t *out, spx_uint32_t *out_len)\n{\n   const int N = st->filt_len;\n   int out_sample = 0;\n   int last_sample = st->last_sample[channel_index];\n   spx_uint32_t samp_frac_num = st->samp_frac_num[channel_index];\n   const int out_stride = st->out_stride;\n   const int int_advance = st->int_advance;\n   const int frac_advance = st->frac_advance;\n   const spx_uint32_t den_rate = st->den_rate;\n   spx_word32_t sum;\n\n   while (!(last_sample >= (spx_int32_t)*in_len || out_sample >= (spx_int32_t)*out_len))\n   {\n      const spx_word16_t *iptr = & in[last_sample];\n\n      const int offset = samp_frac_num*st->oversample/st->den_rate;\n#ifdef FIXED_POINT\n      const spx_word16_t frac = 
PDIV32(SHL32((samp_frac_num*st->oversample) % st->den_rate,15),st->den_rate);\n#else\n      const spx_word16_t frac = ((float)((samp_frac_num*st->oversample) % st->den_rate))/st->den_rate;\n#endif\n      spx_word16_t interp[4];\n\n\n#ifndef OVERRIDE_INTERPOLATE_PRODUCT_SINGLE\n      int j;\n      spx_word32_t accum[4] = {0,0,0,0};\n\n      for(j=0;j<N;j++) {\n        const spx_word16_t curr_in=iptr[j];\n        accum[0] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset-2]);\n        accum[1] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset-1]);\n        accum[2] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset]);\n        accum[3] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset+1]);\n      }\n\n      cubic_coef(frac, interp);\n      sum = MULT16_32_Q15(interp[0],SHR32(accum[0], 1)) + MULT16_32_Q15(interp[1],SHR32(accum[1], 1)) + MULT16_32_Q15(interp[2],SHR32(accum[2], 1)) + MULT16_32_Q15(interp[3],SHR32(accum[3], 1));\n      sum = SATURATE32PSHR(sum, 15, 32767);\n#else\n      cubic_coef(frac, interp);\n      sum = interpolate_product_single(iptr, st->sinc_table + st->oversample + 4 - offset - 2, N, st->oversample, interp);\n#endif\n\n      out[out_stride * out_sample++] = sum;\n      last_sample += int_advance;\n      samp_frac_num += frac_advance;\n      if (samp_frac_num >= den_rate)\n      {\n         samp_frac_num -= den_rate;\n         last_sample++;\n      }\n   }\n\n   st->last_sample[channel_index] = last_sample;\n   st->samp_frac_num[channel_index] = samp_frac_num;\n   return out_sample;\n}\n\n#ifdef FIXED_POINT\n#else\n/* This is the same as the previous function, except with a double-precision accumulator */\nstatic int resampler_basic_interpolate_double(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_word16_t *in, spx_uint32_t *in_len, spx_word16_t *out, spx_uint32_t *out_len)\n{\n   const int N = st->filt_len;\n   int out_sample = 0;\n   int last_sample = 
st->last_sample[channel_index];\n   spx_uint32_t samp_frac_num = st->samp_frac_num[channel_index];\n   const int out_stride = st->out_stride;\n   const int int_advance = st->int_advance;\n   const int frac_advance = st->frac_advance;\n   const spx_uint32_t den_rate = st->den_rate;\n   spx_word32_t sum;\n\n   while (!(last_sample >= (spx_int32_t)*in_len || out_sample >= (spx_int32_t)*out_len))\n   {\n      const spx_word16_t *iptr = & in[last_sample];\n\n      const int offset = samp_frac_num*st->oversample/st->den_rate;\n#ifdef FIXED_POINT\n      const spx_word16_t frac = PDIV32(SHL32((samp_frac_num*st->oversample) % st->den_rate,15),st->den_rate);\n#else\n      const spx_word16_t frac = ((float)((samp_frac_num*st->oversample) % st->den_rate))/st->den_rate;\n#endif\n      spx_word16_t interp[4];\n\n\n#ifndef OVERRIDE_INTERPOLATE_PRODUCT_DOUBLE\n      int j;\n      double accum[4] = {0,0,0,0};\n\n      for(j=0;j<N;j++) {\n        const double curr_in=iptr[j];\n        accum[0] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset-2]);\n        accum[1] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset-1]);\n        accum[2] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset]);\n        accum[3] += MULT16_16(curr_in,st->sinc_table[4+(j+1)*st->oversample-offset+1]);\n      }\n\n      cubic_coef(frac, interp);\n      sum = MULT16_32_Q15(interp[0],accum[0]) + MULT16_32_Q15(interp[1],accum[1]) + MULT16_32_Q15(interp[2],accum[2]) + MULT16_32_Q15(interp[3],accum[3]);\n#else\n      cubic_coef(frac, interp);\n      sum = interpolate_product_double(iptr, st->sinc_table + st->oversample + 4 - offset - 2, N, st->oversample, interp);\n#endif\n\n      out[out_stride * out_sample++] = PSHR32(sum,15);\n      last_sample += int_advance;\n      samp_frac_num += frac_advance;\n      if (samp_frac_num >= den_rate)\n      {\n         samp_frac_num -= den_rate;\n         last_sample++;\n      }\n   }\n\n   st->last_sample[channel_index] = 
last_sample;\n   st->samp_frac_num[channel_index] = samp_frac_num;\n   return out_sample;\n}\n#endif\n\n/* This resampler is used to produce zero output in situations where memory\n   for the filter could not be allocated.  The expected numbers of input and\n   output samples are still processed so that callers failing to check error\n   codes are not surprised, possibly getting into infinite loops. */\nstatic int resampler_basic_zero(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_word16_t *in, spx_uint32_t *in_len, spx_word16_t *out, spx_uint32_t *out_len)\n{\n   int out_sample = 0;\n   int last_sample = st->last_sample[channel_index];\n   spx_uint32_t samp_frac_num = st->samp_frac_num[channel_index];\n   const int out_stride = st->out_stride;\n   const int int_advance = st->int_advance;\n   const int frac_advance = st->frac_advance;\n   const spx_uint32_t den_rate = st->den_rate;\n\n   while (!(last_sample >= (spx_int32_t)*in_len || out_sample >= (spx_int32_t)*out_len))\n   {\n      out[out_stride * out_sample++] = 0;\n      last_sample += int_advance;\n      samp_frac_num += frac_advance;\n      if (samp_frac_num >= den_rate)\n      {\n         samp_frac_num -= den_rate;\n         last_sample++;\n      }\n   }\n\n   st->last_sample[channel_index] = last_sample;\n   st->samp_frac_num[channel_index] = samp_frac_num;\n   return out_sample;\n}\n\nstatic int _muldiv(spx_uint32_t *result, spx_uint32_t value, spx_uint32_t mul, spx_uint32_t div)\n{\n   speex_assert(result);\n   spx_uint32_t major = value / div;\n   spx_uint32_t remainder = value % div;\n   /* TODO: Could use 64 bits operation to check for overflow. 
But only guaranteed in C99+ */\n   if (remainder > UINT32_MAX / mul || major > UINT32_MAX / mul\n       || major * mul > UINT32_MAX - remainder * mul / div)\n      return RESAMPLER_ERR_OVERFLOW;\n   *result = remainder * mul / div + major * mul;\n   return RESAMPLER_ERR_SUCCESS;\n}\n\nstatic int update_filter(SpeexResamplerState *st)\n{\n   spx_uint32_t old_length = st->filt_len;\n   spx_uint32_t old_alloc_size = st->mem_alloc_size;\n   int use_direct;\n   spx_uint32_t min_sinc_table_length;\n   spx_uint32_t min_alloc_size;\n\n   st->int_advance = st->num_rate/st->den_rate;\n   st->frac_advance = st->num_rate%st->den_rate;\n   st->oversample = quality_map[st->quality].oversample;\n   st->filt_len = quality_map[st->quality].base_length;\n\n   if (st->num_rate > st->den_rate)\n   {\n      /* down-sampling */\n      st->cutoff = quality_map[st->quality].downsample_bandwidth * st->den_rate / st->num_rate;\n      if (_muldiv(&st->filt_len,st->filt_len,st->num_rate,st->den_rate) != RESAMPLER_ERR_SUCCESS)\n         goto fail;\n      /* Round up to make sure we have a multiple of 8 for SSE */\n      st->filt_len = ((st->filt_len-1)&(~0x7))+8;\n      if (2*st->den_rate < st->num_rate)\n         st->oversample >>= 1;\n      if (4*st->den_rate < st->num_rate)\n         st->oversample >>= 1;\n      if (8*st->den_rate < st->num_rate)\n         st->oversample >>= 1;\n      if (16*st->den_rate < st->num_rate)\n         st->oversample >>= 1;\n      if (st->oversample < 1)\n         st->oversample = 1;\n   } else {\n      /* up-sampling */\n      st->cutoff = quality_map[st->quality].upsample_bandwidth;\n   }\n\n   /* Choose the resampling type that requires the least amount of memory */\n#ifdef RESAMPLE_FULL_SINC_TABLE\n   use_direct = 1;\n   if (INT_MAX/sizeof(spx_word16_t)/st->den_rate < st->filt_len)\n      goto fail;\n#else\n   use_direct = st->filt_len*st->den_rate <= st->filt_len*st->oversample+8\n                && INT_MAX/sizeof(spx_word16_t)/st->den_rate >= 
st->filt_len;\n#endif\n   if (use_direct)\n   {\n      min_sinc_table_length = st->filt_len*st->den_rate;\n   } else {\n      if ((INT_MAX/sizeof(spx_word16_t)-8)/st->oversample < st->filt_len)\n         goto fail;\n\n      min_sinc_table_length = st->filt_len*st->oversample+8;\n   }\n   if (st->sinc_table_length < min_sinc_table_length)\n   {\n      spx_word16_t *sinc_table = (spx_word16_t *)speex_realloc(st->sinc_table,min_sinc_table_length*sizeof(spx_word16_t));\n      if (!sinc_table)\n         goto fail;\n\n      st->sinc_table = sinc_table;\n      st->sinc_table_length = min_sinc_table_length;\n   }\n   if (use_direct)\n   {\n      spx_uint32_t i;\n      for (i=0;i<st->den_rate;i++)\n      {\n         spx_int32_t j;\n         for (j=0;j<st->filt_len;j++)\n         {\n            st->sinc_table[i*st->filt_len+j] = sinc(st->cutoff,((j-(spx_int32_t)st->filt_len/2+1)-((float)i)/st->den_rate), st->filt_len, quality_map[st->quality].window_func);\n         }\n      }\n#ifdef FIXED_POINT\n      st->resampler_ptr = resampler_basic_direct_single;\n#else\n      if (st->quality>8)\n         st->resampler_ptr = resampler_basic_direct_double;\n      else\n         st->resampler_ptr = resampler_basic_direct_single;\n#endif\n      /*fprintf (stderr, \"resampler uses direct sinc table and normalised cutoff %f\\n\", cutoff);*/\n   } else {\n      spx_int32_t i;\n      for (i=-4;i<(spx_int32_t)(st->oversample*st->filt_len+4);i++)\n         st->sinc_table[i+4] = sinc(st->cutoff,(i/(float)st->oversample - st->filt_len/2), st->filt_len, quality_map[st->quality].window_func);\n#ifdef FIXED_POINT\n      st->resampler_ptr = resampler_basic_interpolate_single;\n#else\n      if (st->quality>8)\n         st->resampler_ptr = resampler_basic_interpolate_double;\n      else\n         st->resampler_ptr = resampler_basic_interpolate_single;\n#endif\n      /*fprintf (stderr, \"resampler uses interpolated sinc table and normalised cutoff %f\\n\", cutoff);*/\n   }\n\n   /* Here's the place 
where we update the filter memory to take into account\n      the change in filter length. It's probably the messiest part of the code\n      due to handling of lots of corner cases. */\n\n   /* Adding buffer_size to filt_len won't overflow here because filt_len\n      could be multiplied by sizeof(spx_word16_t) above. */\n   min_alloc_size = st->filt_len-1 + st->buffer_size;\n   if (min_alloc_size > st->mem_alloc_size)\n   {\n      spx_word16_t *mem;\n      if (INT_MAX/sizeof(spx_word16_t)/st->nb_channels < min_alloc_size)\n          goto fail;\n      else if (!(mem = (spx_word16_t*)speex_realloc(st->mem, st->nb_channels*min_alloc_size * sizeof(*mem))))\n          goto fail;\n\n      st->mem = mem;\n      st->mem_alloc_size = min_alloc_size;\n   }\n   if (!st->started)\n   {\n      spx_uint32_t i;\n      for (i=0;i<st->nb_channels*st->mem_alloc_size;i++)\n         st->mem[i] = 0;\n      /*speex_warning(\"reinit filter\");*/\n   } else if (st->filt_len > old_length)\n   {\n      spx_uint32_t i;\n      /* Increase the filter length */\n      /*speex_warning(\"increase filter size\");*/\n      for (i=st->nb_channels;i--;)\n      {\n         spx_uint32_t j;\n         spx_uint32_t olen = old_length;\n         /*if (st->magic_samples[i])*/\n         {\n            /* Try and remove the magic samples as if nothing had happened */\n\n            /* FIXME: This is wrong but for now we need it to avoid going over the array bounds */\n            olen = old_length + 2*st->magic_samples[i];\n            for (j=old_length-1+st->magic_samples[i];j--;)\n               st->mem[i*st->mem_alloc_size+j+st->magic_samples[i]] = st->mem[i*old_alloc_size+j];\n            for (j=0;j<st->magic_samples[i];j++)\n               st->mem[i*st->mem_alloc_size+j] = 0;\n            st->magic_samples[i] = 0;\n         }\n         if (st->filt_len > olen)\n         {\n            /* If the new filter length is still bigger than the \"augmented\" length */\n            /* Copy data going backward 
*/\n            for (j=0;j<olen-1;j++)\n               st->mem[i*st->mem_alloc_size+(st->filt_len-2-j)] = st->mem[i*st->mem_alloc_size+(olen-2-j)];\n            /* Then put zeros for lack of anything better */\n            for (;j<st->filt_len-1;j++)\n               st->mem[i*st->mem_alloc_size+(st->filt_len-2-j)] = 0;\n            /* Adjust last_sample */\n            st->last_sample[i] += (st->filt_len - olen)/2;\n         } else {\n            /* Put back some of the magic! */\n            st->magic_samples[i] = (olen - st->filt_len)/2;\n            for (j=0;j<st->filt_len-1+st->magic_samples[i];j++)\n               st->mem[i*st->mem_alloc_size+j] = st->mem[i*st->mem_alloc_size+j+st->magic_samples[i]];\n         }\n      }\n   } else if (st->filt_len < old_length)\n   {\n      spx_uint32_t i;\n      /* Reduce filter length, this is a bit tricky. We need to store some of the memory as \"magic\"\n         samples so they can be used directly as input the next time(s) */\n      for (i=0;i<st->nb_channels;i++)\n      {\n         spx_uint32_t j;\n         spx_uint32_t old_magic = st->magic_samples[i];\n         st->magic_samples[i] = (old_length - st->filt_len)/2;\n         /* We must copy some of the memory that's no longer used */\n         /* Copy data going backward */\n         for (j=0;j<st->filt_len-1+st->magic_samples[i]+old_magic;j++)\n            st->mem[i*st->mem_alloc_size+j] = st->mem[i*st->mem_alloc_size+j+st->magic_samples[i]];\n         st->magic_samples[i] += old_magic;\n      }\n   }\n   return RESAMPLER_ERR_SUCCESS;\n\nfail:\n   st->resampler_ptr = resampler_basic_zero;\n   /* st->mem may still contain consumed input samples for the filter.\n      Restore filt_len so that filt_len - 1 still points to the position after\n      the last of these samples. 
*/\n   st->filt_len = old_length;\n   return RESAMPLER_ERR_ALLOC_FAILED;\n}\n\nEXPORT SpeexResamplerState *speex_resampler_init(spx_uint32_t nb_channels, spx_uint32_t in_rate, spx_uint32_t out_rate, int quality, int *err)\n{\n   return speex_resampler_init_frac(nb_channels, in_rate, out_rate, in_rate, out_rate, quality, err);\n}\n\nEXPORT SpeexResamplerState *speex_resampler_init_frac(spx_uint32_t nb_channels, spx_uint32_t ratio_num, spx_uint32_t ratio_den, spx_uint32_t in_rate, spx_uint32_t out_rate, int quality, int *err)\n{\n   spx_uint32_t i;\n   SpeexResamplerState *st;\n   int filter_err;\n\n   if (quality > 10 || quality < 0)\n   {\n      if (err)\n         *err = RESAMPLER_ERR_INVALID_ARG;\n      return NULL;\n   }\n   st = (SpeexResamplerState *)speex_alloc(sizeof(SpeexResamplerState));\n   if (!st)\n   {\n      if (err)\n         *err = RESAMPLER_ERR_ALLOC_FAILED;\n      return NULL;\n   }\n   st->initialised = 0;\n   st->started = 0;\n   st->in_rate = 0;\n   st->out_rate = 0;\n   st->num_rate = 0;\n   st->den_rate = 0;\n   st->quality = -1;\n   st->sinc_table_length = 0;\n   st->mem_alloc_size = 0;\n   st->filt_len = 0;\n   st->mem = 0;\n   st->resampler_ptr = 0;\n\n   st->cutoff = 1.f;\n   st->nb_channels = nb_channels;\n   st->in_stride = 1;\n   st->out_stride = 1;\n\n   st->buffer_size = 160;\n\n   /* Per channel data */\n   if (!(st->last_sample = (spx_int32_t*)speex_alloc(nb_channels*sizeof(spx_int32_t))))\n      goto fail;\n   if (!(st->magic_samples = (spx_uint32_t*)speex_alloc(nb_channels*sizeof(spx_uint32_t))))\n      goto fail;\n   if (!(st->samp_frac_num = (spx_uint32_t*)speex_alloc(nb_channels*sizeof(spx_uint32_t))))\n      goto fail;\n\n   speex_resampler_set_quality(st, quality);\n   speex_resampler_set_rate_frac(st, ratio_num, ratio_den, in_rate, out_rate);\n\n   filter_err = update_filter(st);\n   if (filter_err == RESAMPLER_ERR_SUCCESS)\n   {\n      st->initialised = 1;\n   } else {\n      speex_resampler_destroy(st);\n      st = NULL;\n 
  }\n   if (err)\n      *err = filter_err;\n\n   return st;\n\nfail:\n   if (err)\n      *err = RESAMPLER_ERR_ALLOC_FAILED;\n   speex_resampler_destroy(st);\n   return NULL;\n}\n\nEXPORT void speex_resampler_destroy(SpeexResamplerState *st)\n{\n   speex_free(st->mem);\n   speex_free(st->sinc_table);\n   speex_free(st->last_sample);\n   speex_free(st->magic_samples);\n   speex_free(st->samp_frac_num);\n   speex_free(st);\n}\n\nstatic int speex_resampler_process_native(SpeexResamplerState *st, spx_uint32_t channel_index, spx_uint32_t *in_len, spx_word16_t *out, spx_uint32_t *out_len)\n{\n   int j=0;\n   const int N = st->filt_len;\n   int out_sample = 0;\n   spx_word16_t *mem = st->mem + channel_index * st->mem_alloc_size;\n   spx_uint32_t ilen;\n\n   st->started = 1;\n\n   /* Call the right resampler through the function ptr */\n   out_sample = st->resampler_ptr(st, channel_index, mem, in_len, out, out_len);\n\n   if (st->last_sample[channel_index] < (spx_int32_t)*in_len)\n      *in_len = st->last_sample[channel_index];\n   *out_len = out_sample;\n   st->last_sample[channel_index] -= *in_len;\n\n   ilen = *in_len;\n\n   for(j=0;j<N-1;++j)\n     mem[j] = mem[j+ilen];\n\n   return RESAMPLER_ERR_SUCCESS;\n}\n\nstatic int speex_resampler_magic(SpeexResamplerState *st, spx_uint32_t channel_index, spx_word16_t **out, spx_uint32_t out_len) {\n   spx_uint32_t tmp_in_len = st->magic_samples[channel_index];\n   spx_word16_t *mem = st->mem + channel_index * st->mem_alloc_size;\n   const int N = st->filt_len;\n\n   speex_resampler_process_native(st, channel_index, &tmp_in_len, *out, &out_len);\n\n   st->magic_samples[channel_index] -= tmp_in_len;\n\n   /* If we couldn't process all \"magic\" input samples, save the rest for next time */\n   if (st->magic_samples[channel_index])\n   {\n      spx_uint32_t i;\n      for (i=0;i<st->magic_samples[channel_index];i++)\n         mem[N-1+i]=mem[N-1+i+tmp_in_len];\n   }\n   *out += out_len*st->out_stride;\n   return out_len;\n}\n\n#ifdef 
FIXED_POINT\nEXPORT int speex_resampler_process_int(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_int16_t *in, spx_uint32_t *in_len, spx_int16_t *out, spx_uint32_t *out_len)\n#else\nEXPORT int speex_resampler_process_float(SpeexResamplerState *st, spx_uint32_t channel_index, const float *in, spx_uint32_t *in_len, float *out, spx_uint32_t *out_len)\n#endif\n{\n   int j;\n   spx_uint32_t ilen = *in_len;\n   spx_uint32_t olen = *out_len;\n   spx_word16_t *x = st->mem + channel_index * st->mem_alloc_size;\n   const int filt_offs = st->filt_len - 1;\n   const spx_uint32_t xlen = st->mem_alloc_size - filt_offs;\n   const int istride = st->in_stride;\n\n   if (st->magic_samples[channel_index])\n      olen -= speex_resampler_magic(st, channel_index, &out, olen);\n   if (! st->magic_samples[channel_index]) {\n      while (ilen && olen) {\n        spx_uint32_t ichunk = (ilen > xlen) ? xlen : ilen;\n        spx_uint32_t ochunk = olen;\n\n        if (in) {\n           for(j=0;j<ichunk;++j)\n              x[j+filt_offs]=in[j*istride];\n        } else {\n          for(j=0;j<ichunk;++j)\n            x[j+filt_offs]=0;\n        }\n        speex_resampler_process_native(st, channel_index, &ichunk, out, &ochunk);\n        ilen -= ichunk;\n        olen -= ochunk;\n        out += ochunk * st->out_stride;\n        if (in)\n           in += ichunk * istride;\n      }\n   }\n   *in_len -= ilen;\n   *out_len -= olen;\n   return st->resampler_ptr == resampler_basic_zero ? 
RESAMPLER_ERR_ALLOC_FAILED : RESAMPLER_ERR_SUCCESS;\n}\n\n#ifdef FIXED_POINT\nEXPORT int speex_resampler_process_float(SpeexResamplerState *st, spx_uint32_t channel_index, const float *in, spx_uint32_t *in_len, float *out, spx_uint32_t *out_len)\n#else\nEXPORT int speex_resampler_process_int(SpeexResamplerState *st, spx_uint32_t channel_index, const spx_int16_t *in, spx_uint32_t *in_len, spx_int16_t *out, spx_uint32_t *out_len)\n#endif\n{\n   int j;\n   const int istride_save = st->in_stride;\n   const int ostride_save = st->out_stride;\n   spx_uint32_t ilen = *in_len;\n   spx_uint32_t olen = *out_len;\n   spx_word16_t *x = st->mem + channel_index * st->mem_alloc_size;\n   const spx_uint32_t xlen = st->mem_alloc_size - (st->filt_len - 1);\n#ifdef VAR_ARRAYS\n   const unsigned int ylen = (olen < FIXED_STACK_ALLOC) ? olen : FIXED_STACK_ALLOC;\n   VARDECL(spx_word16_t *ystack);\n   ALLOC(ystack, ylen, spx_word16_t);\n#else\n   const unsigned int ylen = FIXED_STACK_ALLOC;\n   spx_word16_t ystack[FIXED_STACK_ALLOC];\n#endif\n\n   st->out_stride = 1;\n\n   while (ilen && olen) {\n     spx_word16_t *y = ystack;\n     spx_uint32_t ichunk = (ilen > xlen) ? xlen : ilen;\n     spx_uint32_t ochunk = (olen > ylen) ? ylen : olen;\n     spx_uint32_t omagic = 0;\n\n     if (st->magic_samples[channel_index]) {\n       omagic = speex_resampler_magic(st, channel_index, &y, ochunk);\n       ochunk -= omagic;\n       olen -= omagic;\n     }\n     if (! 
st->magic_samples[channel_index]) {\n       if (in) {\n         for(j=0;j<ichunk;++j)\n#ifdef FIXED_POINT\n           x[j+st->filt_len-1]=WORD2INT(in[j*istride_save]);\n#else\n           x[j+st->filt_len-1]=in[j*istride_save];\n#endif\n       } else {\n         for(j=0;j<ichunk;++j)\n           x[j+st->filt_len-1]=0;\n       }\n\n       speex_resampler_process_native(st, channel_index, &ichunk, y, &ochunk);\n     } else {\n       ichunk = 0;\n       ochunk = 0;\n     }\n\n     for (j=0;j<ochunk+omagic;++j)\n#ifdef FIXED_POINT\n        out[j*ostride_save] = ystack[j];\n#else\n        out[j*ostride_save] = WORD2INT(ystack[j]);\n#endif\n\n     ilen -= ichunk;\n     olen -= ochunk;\n     out += (ochunk+omagic) * ostride_save;\n     if (in)\n       in += ichunk * istride_save;\n   }\n   st->out_stride = ostride_save;\n   *in_len -= ilen;\n   *out_len -= olen;\n\n   return st->resampler_ptr == resampler_basic_zero ? RESAMPLER_ERR_ALLOC_FAILED : RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT int speex_resampler_process_interleaved_float(SpeexResamplerState *st, const float *in, spx_uint32_t *in_len, float *out, spx_uint32_t *out_len)\n{\n   spx_uint32_t i;\n   int istride_save, ostride_save;\n   spx_uint32_t bak_out_len = *out_len;\n   spx_uint32_t bak_in_len = *in_len;\n   istride_save = st->in_stride;\n   ostride_save = st->out_stride;\n   st->in_stride = st->out_stride = st->nb_channels;\n   for (i=0;i<st->nb_channels;i++)\n   {\n      *out_len = bak_out_len;\n      *in_len = bak_in_len;\n      if (in != NULL)\n         speex_resampler_process_float(st, i, in+i, in_len, out+i, out_len);\n      else\n         speex_resampler_process_float(st, i, NULL, in_len, out+i, out_len);\n   }\n   st->in_stride = istride_save;\n   st->out_stride = ostride_save;\n   return st->resampler_ptr == resampler_basic_zero ? 
RESAMPLER_ERR_ALLOC_FAILED : RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT int speex_resampler_process_interleaved_int(SpeexResamplerState *st, const spx_int16_t *in, spx_uint32_t *in_len, spx_int16_t *out, spx_uint32_t *out_len)\n{\n   spx_uint32_t i;\n   int istride_save, ostride_save;\n   spx_uint32_t bak_out_len = *out_len;\n   spx_uint32_t bak_in_len = *in_len;\n   istride_save = st->in_stride;\n   ostride_save = st->out_stride;\n   st->in_stride = st->out_stride = st->nb_channels;\n   for (i=0;i<st->nb_channels;i++)\n   {\n      *out_len = bak_out_len;\n      *in_len = bak_in_len;\n      if (in != NULL)\n         speex_resampler_process_int(st, i, in+i, in_len, out+i, out_len);\n      else\n         speex_resampler_process_int(st, i, NULL, in_len, out+i, out_len);\n   }\n   st->in_stride = istride_save;\n   st->out_stride = ostride_save;\n   return st->resampler_ptr == resampler_basic_zero ? RESAMPLER_ERR_ALLOC_FAILED : RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT int speex_resampler_set_rate(SpeexResamplerState *st, spx_uint32_t in_rate, spx_uint32_t out_rate)\n{\n   return speex_resampler_set_rate_frac(st, in_rate, out_rate, in_rate, out_rate);\n}\n\nEXPORT void speex_resampler_get_rate(SpeexResamplerState *st, spx_uint32_t *in_rate, spx_uint32_t *out_rate)\n{\n   *in_rate = st->in_rate;\n   *out_rate = st->out_rate;\n}\n\nstatic inline spx_uint32_t _gcd(spx_uint32_t a, spx_uint32_t b)\n{\n   while (b != 0)\n   {\n      spx_uint32_t temp = a;\n\n      a = b;\n      b = temp % b;\n   }\n   return a;\n}\n\nEXPORT int speex_resampler_set_rate_frac(SpeexResamplerState *st, spx_uint32_t ratio_num, spx_uint32_t ratio_den, spx_uint32_t in_rate, spx_uint32_t out_rate)\n{\n   spx_uint32_t fact;\n   spx_uint32_t old_den;\n   spx_uint32_t i;\n   if (st->in_rate == in_rate && st->out_rate == out_rate && st->num_rate == ratio_num && st->den_rate == ratio_den)\n      return RESAMPLER_ERR_SUCCESS;\n\n   old_den = st->den_rate;\n   st->in_rate = in_rate;\n   st->out_rate = out_rate;\n   
st->num_rate = ratio_num;\n   st->den_rate = ratio_den;\n\n   fact = _gcd (st->num_rate, st->den_rate);\n\n   st->num_rate /= fact;\n   st->den_rate /= fact;\n\n   if (old_den > 0)\n   {\n      for (i=0;i<st->nb_channels;i++)\n      {\n         if (_muldiv(&st->samp_frac_num[i],st->samp_frac_num[i],st->den_rate,old_den) != RESAMPLER_ERR_SUCCESS)\n            return RESAMPLER_ERR_OVERFLOW;\n         /* Safety net */\n         if (st->samp_frac_num[i] >= st->den_rate)\n            st->samp_frac_num[i] = st->den_rate-1;\n      }\n   }\n\n   if (st->initialised)\n      return update_filter(st);\n   return RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT void speex_resampler_get_ratio(SpeexResamplerState *st, spx_uint32_t *ratio_num, spx_uint32_t *ratio_den)\n{\n   *ratio_num = st->num_rate;\n   *ratio_den = st->den_rate;\n}\n\nEXPORT int speex_resampler_set_quality(SpeexResamplerState *st, int quality)\n{\n   if (quality > 10 || quality < 0)\n      return RESAMPLER_ERR_INVALID_ARG;\n   if (st->quality == quality)\n      return RESAMPLER_ERR_SUCCESS;\n   st->quality = quality;\n   if (st->initialised)\n      return update_filter(st);\n   return RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT void speex_resampler_get_quality(SpeexResamplerState *st, int *quality)\n{\n   *quality = st->quality;\n}\n\nEXPORT void speex_resampler_set_input_stride(SpeexResamplerState *st, spx_uint32_t stride)\n{\n   st->in_stride = stride;\n}\n\nEXPORT void speex_resampler_get_input_stride(SpeexResamplerState *st, spx_uint32_t *stride)\n{\n   *stride = st->in_stride;\n}\n\nEXPORT void speex_resampler_set_output_stride(SpeexResamplerState *st, spx_uint32_t stride)\n{\n   st->out_stride = stride;\n}\n\nEXPORT void speex_resampler_get_output_stride(SpeexResamplerState *st, spx_uint32_t *stride)\n{\n   *stride = st->out_stride;\n}\n\nEXPORT int speex_resampler_get_input_latency(SpeexResamplerState *st)\n{\n  return st->filt_len / 2;\n}\n\nEXPORT int speex_resampler_get_output_latency(SpeexResamplerState *st)\n{\n  
return ((st->filt_len / 2) * st->den_rate + (st->num_rate >> 1)) / st->num_rate;\n}\n\nEXPORT int speex_resampler_skip_zeros(SpeexResamplerState *st)\n{\n   spx_uint32_t i;\n   for (i=0;i<st->nb_channels;i++)\n      st->last_sample[i] = st->filt_len/2;\n   return RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT int speex_resampler_reset_mem(SpeexResamplerState *st)\n{\n   spx_uint32_t i;\n   for (i=0;i<st->nb_channels;i++)\n   {\n      st->last_sample[i] = 0;\n      st->magic_samples[i] = 0;\n      st->samp_frac_num[i] = 0;\n   }\n   for (i=0;i<st->nb_channels*(st->filt_len-1);i++)\n      st->mem[i] = 0;\n   return RESAMPLER_ERR_SUCCESS;\n}\n\nEXPORT const char *speex_resampler_strerror(int err)\n{\n   switch (err)\n   {\n      case RESAMPLER_ERR_SUCCESS:\n         return \"Success.\";\n      case RESAMPLER_ERR_ALLOC_FAILED:\n         return \"Memory allocation failed.\";\n      case RESAMPLER_ERR_BAD_STATE:\n         return \"Bad resampler state.\";\n      case RESAMPLER_ERR_INVALID_ARG:\n         return \"Invalid argument.\";\n      case RESAMPLER_ERR_PTR_OVERLAP:\n         return \"Input and output buffers overlap.\";\n      default:\n         return \"Unknown error. Bad error code or strange version mismatch.\";\n   }\n}\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/resample_sse.h",
    "content": "/* Copyright (C) 2007-2008 Jean-Marc Valin\n * Copyright (C) 2008 Thorvald Natvig\n */\n/**\n   @file resample_sse.h\n   @brief Resampler functions (SSE version)\n*/\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   - Neither the name of the Xiph.org Foundation nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  
IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#include <xmmintrin.h>\n\n#define OVERRIDE_INNER_PRODUCT_SINGLE\nstatic inline float inner_product_single(const float *a, const float *b, unsigned int len)\n{\n   int i;\n   float ret;\n   __m128 sum = _mm_setzero_ps();\n   for (i=0;i<len;i+=8)\n   {\n      sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i), _mm_loadu_ps(b+i)));\n      sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i+4), _mm_loadu_ps(b+i+4)));\n   }\n   sum = _mm_add_ps(sum, _mm_movehl_ps(sum, sum));\n   sum = _mm_add_ss(sum, _mm_shuffle_ps(sum, sum, 0x55));\n   _mm_store_ss(&ret, sum);\n   return ret;\n}\n\n#define OVERRIDE_INTERPOLATE_PRODUCT_SINGLE\nstatic inline float interpolate_product_single(const float *a, const float *b, unsigned int len, const spx_uint32_t oversample, float *frac) {\n  int i;\n  float ret;\n  __m128 sum = _mm_setzero_ps();\n  __m128 f = _mm_loadu_ps(frac);\n  for(i=0;i<len;i+=2)\n  {\n    sum = _mm_add_ps(sum, _mm_mul_ps(_mm_load1_ps(a+i), _mm_loadu_ps(b+i*oversample)));\n    sum = _mm_add_ps(sum, _mm_mul_ps(_mm_load1_ps(a+i+1), _mm_loadu_ps(b+(i+1)*oversample)));\n  }\n   sum = _mm_mul_ps(f, sum);\n   sum = _mm_add_ps(sum, _mm_movehl_ps(sum, sum));\n   sum = _mm_add_ss(sum, _mm_shuffle_ps(sum, sum, 0x55));\n   _mm_store_ss(&ret, sum);\n   return ret;\n}\n\n#ifdef __SSE2__\n#include <emmintrin.h>\n#define OVERRIDE_INNER_PRODUCT_DOUBLE\n\nstatic inline double inner_product_double(const float *a, const float *b, unsigned int len)\n{\n   
int i;\n   double ret;\n   __m128d sum = _mm_setzero_pd();\n   __m128 t;\n   for (i=0;i<len;i+=8)\n   {\n      t = _mm_mul_ps(_mm_loadu_ps(a+i), _mm_loadu_ps(b+i));\n      sum = _mm_add_pd(sum, _mm_cvtps_pd(t));\n      sum = _mm_add_pd(sum, _mm_cvtps_pd(_mm_movehl_ps(t, t)));\n\n      t = _mm_mul_ps(_mm_loadu_ps(a+i+4), _mm_loadu_ps(b+i+4));\n      sum = _mm_add_pd(sum, _mm_cvtps_pd(t));\n      sum = _mm_add_pd(sum, _mm_cvtps_pd(_mm_movehl_ps(t, t)));\n   }\n   sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));\n   _mm_store_sd(&ret, sum);\n   return ret;\n}\n\n#define OVERRIDE_INTERPOLATE_PRODUCT_DOUBLE\nstatic inline double interpolate_product_double(const float *a, const float *b, unsigned int len, const spx_uint32_t oversample, float *frac) {\n  int i;\n  double ret;\n  __m128d sum;\n  __m128d sum1 = _mm_setzero_pd();\n  __m128d sum2 = _mm_setzero_pd();\n  __m128 f = _mm_loadu_ps(frac);\n  __m128d f1 = _mm_cvtps_pd(f);\n  __m128d f2 = _mm_cvtps_pd(_mm_movehl_ps(f,f));\n  __m128 t;\n  for(i=0;i<len;i+=2)\n  {\n    t = _mm_mul_ps(_mm_load1_ps(a+i), _mm_loadu_ps(b+i*oversample));\n    sum1 = _mm_add_pd(sum1, _mm_cvtps_pd(t));\n    sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));\n\n    t = _mm_mul_ps(_mm_load1_ps(a+i+1), _mm_loadu_ps(b+(i+1)*oversample));\n    sum1 = _mm_add_pd(sum1, _mm_cvtps_pd(t));\n    sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));\n  }\n  sum1 = _mm_mul_pd(f1, sum1);\n  sum2 = _mm_mul_pd(f2, sum2);\n  sum = _mm_add_pd(sum1, sum2);\n  sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));\n  _mm_store_sd(&ret, sum);\n  return ret;\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/speex_resampler.h",
    "content": "/* Copyright (C) 2007 Jean-Marc Valin\n\n   File: speex_resampler.h\n   Resampling code\n\n   The design goals of this code are:\n      - Very fast algorithm\n      - Low memory requirement\n      - Good *perceptual* quality (and not best SNR)\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions are\n   met:\n\n   1. Redistributions of source code must retain the above copyright notice,\n   this list of conditions and the following disclaimer.\n\n   2. Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   3. The name of the author may not be used to endorse or promote products\n   derived from this software without specific prior written permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR\n   IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n   OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n   DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,\n   INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,\n   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n   ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n   POSSIBILITY OF SUCH DAMAGE.\n*/\n\n\n#ifndef SPEEX_RESAMPLER_H\n#define SPEEX_RESAMPLER_H\n\n\n#define CAT_PREFIX2(a,b) a ## b\n#define CAT_PREFIX(a,b) CAT_PREFIX2(a, b)\n\n#define speex_resampler_init CAT_PREFIX(RANDOM_PREFIX,_resampler_init)\n#define speex_resampler_init_frac CAT_PREFIX(RANDOM_PREFIX,_resampler_init_frac)\n#define speex_resampler_destroy CAT_PREFIX(RANDOM_PREFIX,_resampler_destroy)\n#define speex_resampler_process_float CAT_PREFIX(RANDOM_PREFIX,_resampler_process_float)\n#define speex_resampler_process_int CAT_PREFIX(RANDOM_PREFIX,_resampler_process_int)\n#define speex_resampler_process_interleaved_float CAT_PREFIX(RANDOM_PREFIX,_resampler_process_interleaved_float)\n#define speex_resampler_process_interleaved_int CAT_PREFIX(RANDOM_PREFIX,_resampler_process_interleaved_int)\n#define speex_resampler_set_rate CAT_PREFIX(RANDOM_PREFIX,_resampler_set_rate)\n#define speex_resampler_get_rate CAT_PREFIX(RANDOM_PREFIX,_resampler_get_rate)\n#define speex_resampler_set_rate_frac CAT_PREFIX(RANDOM_PREFIX,_resampler_set_rate_frac)\n#define speex_resampler_get_ratio CAT_PREFIX(RANDOM_PREFIX,_resampler_get_ratio)\n#define speex_resampler_set_quality CAT_PREFIX(RANDOM_PREFIX,_resampler_set_quality)\n#define speex_resampler_get_quality CAT_PREFIX(RANDOM_PREFIX,_resampler_get_quality)\n#define speex_resampler_set_input_stride CAT_PREFIX(RANDOM_PREFIX,_resampler_set_input_stride)\n#define speex_resampler_get_input_stride CAT_PREFIX(RANDOM_PREFIX,_resampler_get_input_stride)\n#define 
speex_resampler_set_output_stride CAT_PREFIX(RANDOM_PREFIX,_resampler_set_output_stride)\n#define speex_resampler_get_output_stride CAT_PREFIX(RANDOM_PREFIX,_resampler_get_output_stride)\n#define speex_resampler_get_input_latency CAT_PREFIX(RANDOM_PREFIX,_resampler_get_input_latency)\n#define speex_resampler_get_output_latency CAT_PREFIX(RANDOM_PREFIX,_resampler_get_output_latency)\n#define speex_resampler_skip_zeros CAT_PREFIX(RANDOM_PREFIX,_resampler_skip_zeros)\n#define speex_resampler_reset_mem CAT_PREFIX(RANDOM_PREFIX,_resampler_reset_mem)\n#define speex_resampler_strerror CAT_PREFIX(RANDOM_PREFIX,_resampler_strerror)\n\n#define spx_int16_t short\n#define spx_int32_t int\n#define spx_uint16_t unsigned short\n#define spx_uint32_t unsigned int\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#define SPEEX_RESAMPLER_QUALITY_MAX 10\n#define SPEEX_RESAMPLER_QUALITY_MIN 0\n#define SPEEX_RESAMPLER_QUALITY_DEFAULT 4\n#define SPEEX_RESAMPLER_QUALITY_VOIP 3\n#define SPEEX_RESAMPLER_QUALITY_DESKTOP 5\n\nenum {\n   RESAMPLER_ERR_SUCCESS         = 0,\n   RESAMPLER_ERR_ALLOC_FAILED    = 1,\n   RESAMPLER_ERR_BAD_STATE       = 2,\n   RESAMPLER_ERR_INVALID_ARG     = 3,\n   RESAMPLER_ERR_PTR_OVERLAP     = 4,\n   RESAMPLER_ERR_OVERFLOW        = 5,\n\n   RESAMPLER_ERR_MAX_ERROR\n};\n\nstruct SpeexResamplerState_;\ntypedef struct SpeexResamplerState_ SpeexResamplerState;\n\n/** Create a new resampler with integer input and output rates.\n * @param nb_channels Number of channels to be processed\n * @param in_rate Input sampling rate (integer number of Hz).\n * @param out_rate Output sampling rate (integer number of Hz).\n * @param quality Resampling quality between 0 and 10, where 0 has poor quality\n * and 10 has very high quality.\n * @return Newly created resampler state\n * @retval NULL Error: not enough memory\n */\nSpeexResamplerState *speex_resampler_init(spx_uint32_t nb_channels,\n                                          spx_uint32_t in_rate,\n                              
            spx_uint32_t out_rate,\n                                          int quality,\n                                          int *err);\n\n/** Create a new resampler with fractional input/output rates. The sampling\n * rate ratio is an arbitrary rational number with both the numerator and\n * denominator being 32-bit integers.\n * @param nb_channels Number of channels to be processed\n * @param ratio_num Numerator of the sampling rate ratio\n * @param ratio_den Denominator of the sampling rate ratio\n * @param in_rate Input sampling rate rounded to the nearest integer (in Hz).\n * @param out_rate Output sampling rate rounded to the nearest integer (in Hz).\n * @param quality Resampling quality between 0 and 10, where 0 has poor quality\n * and 10 has very high quality.\n * @return Newly created resampler state\n * @retval NULL Error: not enough memory\n */\nSpeexResamplerState *speex_resampler_init_frac(spx_uint32_t nb_channels,\n                                               spx_uint32_t ratio_num,\n                                               spx_uint32_t ratio_den,\n                                               spx_uint32_t in_rate,\n                                               spx_uint32_t out_rate,\n                                               int quality,\n                                               int *err);\n\n/** Destroy a resampler state.\n * @param st Resampler state\n */\nvoid speex_resampler_destroy(SpeexResamplerState *st);\n\n/** Resample a float array. The input and output buffers must *not* overlap.\n * @param st Resampler state\n * @param channel_index Index of the channel to process for the multi-channel\n * base (0 otherwise)\n * @param in Input buffer\n * @param in_len Number of input samples in the input buffer. Returns the\n * number of samples processed\n * @param out Output buffer\n * @param out_len Size of the output buffer. 
Returns the number of samples written\n */\nint speex_resampler_process_float(SpeexResamplerState *st,\n                                   spx_uint32_t channel_index,\n                                   const float *in,\n                                   spx_uint32_t *in_len,\n                                   float *out,\n                                   spx_uint32_t *out_len);\n\n/** Resample an int array. The input and output buffers must *not* overlap.\n * @param st Resampler state\n * @param channel_index Index of the channel to process for the multi-channel\n * base (0 otherwise)\n * @param in Input buffer\n * @param in_len Number of input samples in the input buffer. Returns the number\n * of samples processed\n * @param out Output buffer\n * @param out_len Size of the output buffer. Returns the number of samples written\n */\nint speex_resampler_process_int(SpeexResamplerState *st,\n                                 spx_uint32_t channel_index,\n                                 const spx_int16_t *in,\n                                 spx_uint32_t *in_len,\n                                 spx_int16_t *out,\n                                 spx_uint32_t *out_len);\n\n/** Resample an interleaved float array. The input and output buffers must *not* overlap.\n * @param st Resampler state\n * @param in Input buffer\n * @param in_len Number of input samples in the input buffer. Returns the number\n * of samples processed. This is all per-channel.\n * @param out Output buffer\n * @param out_len Size of the output buffer. 
Returns the number of samples written.\n * This is all per-channel.\n */\nint speex_resampler_process_interleaved_float(SpeexResamplerState *st,\n                                               const float *in,\n                                               spx_uint32_t *in_len,\n                                               float *out,\n                                               spx_uint32_t *out_len);\n\n/** Resample an interleaved int array. The input and output buffers must *not* overlap.\n * @param st Resampler state\n * @param in Input buffer\n * @param in_len Number of input samples in the input buffer. Returns the number\n * of samples processed. This is all per-channel.\n * @param out Output buffer\n * @param out_len Size of the output buffer. Returns the number of samples written.\n * This is all per-channel.\n */\nint speex_resampler_process_interleaved_int(SpeexResamplerState *st,\n                                             const spx_int16_t *in,\n                                             spx_uint32_t *in_len,\n                                             spx_int16_t *out,\n                                             spx_uint32_t *out_len);\n\n/** Set (change) the input/output sampling rates (integer value).\n * @param st Resampler state\n * @param in_rate Input sampling rate (integer number of Hz).\n * @param out_rate Output sampling rate (integer number of Hz).\n */\nint speex_resampler_set_rate(SpeexResamplerState *st,\n                              spx_uint32_t in_rate,\n                              spx_uint32_t out_rate);\n\n/** Get the current input/output sampling rates (integer value).\n * @param st Resampler state\n * @param in_rate Input sampling rate (integer number of Hz) copied.\n * @param out_rate Output sampling rate (integer number of Hz) copied.\n */\nvoid speex_resampler_get_rate(SpeexResamplerState *st,\n                              spx_uint32_t *in_rate,\n                              spx_uint32_t *out_rate);\n\n/** Set 
(change) the input/output sampling rates and resampling ratio\n * (fractional values in Hz supported).\n * @param st Resampler state\n * @param ratio_num Numerator of the sampling rate ratio\n * @param ratio_den Denominator of the sampling rate ratio\n * @param in_rate Input sampling rate rounded to the nearest integer (in Hz).\n * @param out_rate Output sampling rate rounded to the nearest integer (in Hz).\n */\nint speex_resampler_set_rate_frac(SpeexResamplerState *st,\n                                   spx_uint32_t ratio_num,\n                                   spx_uint32_t ratio_den,\n                                   spx_uint32_t in_rate,\n                                   spx_uint32_t out_rate);\n\n/** Get the current resampling ratio. This will be reduced to the least\n * common denominator.\n * @param st Resampler state\n * @param ratio_num Numerator of the sampling rate ratio copied\n * @param ratio_den Denominator of the sampling rate ratio copied\n */\nvoid speex_resampler_get_ratio(SpeexResamplerState *st,\n                               spx_uint32_t *ratio_num,\n                               spx_uint32_t *ratio_den);\n\n/** Set (change) the conversion quality.\n * @param st Resampler state\n * @param quality Resampling quality between 0 and 10, where 0 has poor\n * quality and 10 has very high quality.\n */\nint speex_resampler_set_quality(SpeexResamplerState *st,\n                                 int quality);\n\n/** Get the conversion quality.\n * @param st Resampler state\n * @param quality Resampling quality between 0 and 10, where 0 has poor\n * quality and 10 has very high quality.\n */\nvoid speex_resampler_get_quality(SpeexResamplerState *st,\n                                 int *quality);\n\n/** Set (change) the input stride.\n * @param st Resampler state\n * @param stride Input stride\n */\nvoid speex_resampler_set_input_stride(SpeexResamplerState *st,\n                                      spx_uint32_t stride);\n\n/** Get the input 
stride.\n * @param st Resampler state\n * @param stride Input stride copied\n */\nvoid speex_resampler_get_input_stride(SpeexResamplerState *st,\n                                      spx_uint32_t *stride);\n\n/** Set (change) the output stride.\n * @param st Resampler state\n * @param stride Output stride\n */\nvoid speex_resampler_set_output_stride(SpeexResamplerState *st,\n                                      spx_uint32_t stride);\n\n/** Get the output stride.\n * @param st Resampler state\n * @param stride Output stride copied\n */\nvoid speex_resampler_get_output_stride(SpeexResamplerState *st,\n                                      spx_uint32_t *stride);\n\n/** Get the latency introduced by the resampler measured in input samples.\n * @param st Resampler state\n */\nint speex_resampler_get_input_latency(SpeexResamplerState *st);\n\n/** Get the latency introduced by the resampler measured in output samples.\n * @param st Resampler state\n */\nint speex_resampler_get_output_latency(SpeexResamplerState *st);\n\n/** Make sure that the first samples to go out of the resamplers don't have\n * leading zeros. This is only useful before starting to use a newly created\n * resampler. It is recommended to use that when resampling an audio file, as\n * it will generate a file with the same length. For real-time processing,\n * it is probably easier not to use this call (so that the output duration\n * is the same for the first frame).\n * @param st Resampler state\n */\nint speex_resampler_skip_zeros(SpeexResamplerState *st);\n\n/** Reset a resampler so a new (unrelated) stream can be processed.\n * @param st Resampler state\n */\nint speex_resampler_reset_mem(SpeexResamplerState *st);\n\n/** Returns the English meaning for an error code\n * @param err Error code\n * @return English string\n */\nconst char *speex_resampler_strerror(int err);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "ThirdParty/libopusenc/src/stack_alloc.h",
    "content": "/* Copyright (C) 2002 Jean-Marc Valin */\n/**\n   @file stack_alloc.h\n   @brief Temporary memory allocation on stack\n*/\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   - Neither the name of the Xiph.org Foundation nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  
IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef STACK_ALLOC_H\n#define STACK_ALLOC_H\n\n#ifdef USE_ALLOCA\n# ifdef WIN32\n#  include <malloc.h>\n# else\n#  ifdef HAVE_ALLOCA_H\n#   include <alloca.h>\n#  else\n#   include <stdlib.h>\n#  endif\n# endif\n#endif\n\n/**\n * @def ALIGN(stack, size)\n *\n * Aligns the stack to a 'size' boundary\n *\n * @param stack Stack\n * @param size  New size boundary\n */\n\n/**\n * @def PUSH(stack, size, type)\n *\n * Allocates 'size' elements of type 'type' on the stack\n *\n * @param stack Stack\n * @param size  Number of elements\n * @param type  Type of element\n */\n\n/**\n * @def VARDECL(var)\n *\n * Declare variable on stack\n *\n * @param var Variable to declare\n */\n\n/**\n * @def ALLOC(var, size, type)\n *\n * Allocate 'size' elements of 'type' on stack\n *\n * @param var  Name of variable to allocate\n * @param size Number of elements\n * @param type Type of element\n */\n\n#ifdef ENABLE_VALGRIND\n\n#include <valgrind/memcheck.h>\n\n#define ALIGN(stack, size) ((stack) += ((size) - (long)(stack)) & ((size) - 1))\n\n#define PUSH(stack, size, type) (VALGRIND_MAKE_NOACCESS(stack, 1000),ALIGN((stack),sizeof(type)),VALGRIND_MAKE_WRITABLE(stack, ((size)*sizeof(type))),(stack)+=((size)*sizeof(type)),(type*)((stack)-((size)*sizeof(type))))\n\n#else\n\n#define ALIGN(stack, size) ((stack) += ((size) - (long)(stack)) & ((size) - 1))\n\n#define PUSH(stack, size, type) 
(ALIGN((stack),sizeof(type)),(stack)+=((size)*sizeof(type)),(type*)((stack)-((size)*sizeof(type))))\n\n#endif\n\n#if defined(VAR_ARRAYS)\n#define VARDECL(var)\n#define ALLOC(var, size, type) type var[size]\n#elif defined(USE_ALLOCA)\n#define VARDECL(var) var\n#define ALLOC(var, size, type) var = alloca(sizeof(type)*(size))\n#else\n#define VARDECL(var) var\n#define ALLOC(var, size, type) var = PUSH(stack, size, type)\n#endif\n\n\n#endif\n"
  },
  {
    "path": "ThirdParty/libopusenc/win32/config.h",
    "content": "#ifndef CONFIG_H\n#define CONFIG_H\n\n#define OPE_BUILD            1\n#define PACKAGE_NAME \t\"dwa\"\n#define PACKAGE_VERSION \"dwadaw\"\n#endif /* CONFIG_H */\n"
  },
  {
    "path": "ThirdParty/loopback/cleanup.h",
    "content": "// cleanup.h\n\nclass AudioClientStopOnExit {\npublic:\n    AudioClientStopOnExit(IAudioClient *p) : m_p(p) {}\n    ~AudioClientStopOnExit() {\n        HRESULT hr = m_p->Stop();\n        if (FAILED(hr)) {\n            ERR(L\"IAudioClient::Stop failed: hr = 0x%08x\", hr);\n        }\n    }\n\nprivate:\n    IAudioClient *m_p;\n};\n\nclass AvRevertMmThreadCharacteristicsOnExit {\npublic:\n    AvRevertMmThreadCharacteristicsOnExit(HANDLE hTask) : m_hTask(hTask) {}\n    ~AvRevertMmThreadCharacteristicsOnExit() {\n        if (!AvRevertMmThreadCharacteristics(m_hTask)) {\n            ERR(L\"AvRevertMmThreadCharacteristics failed: last error is %d\", GetLastError());\n        }\n    }\nprivate:\n    HANDLE m_hTask;\n};\n\nclass CancelWaitableTimerOnExit {\npublic:\n    CancelWaitableTimerOnExit(HANDLE h) : m_h(h) {}\n    ~CancelWaitableTimerOnExit() {\n        if (!CancelWaitableTimer(m_h)) {\n            ERR(L\"CancelWaitableTimer failed: last error is %d\", GetLastError());\n        }\n    }\nprivate:\n    HANDLE m_h;\n};\n\nclass CloseHandleOnExit {\npublic:\n    CloseHandleOnExit(HANDLE h) : m_h(h) {}\n    ~CloseHandleOnExit() {\n        if (!CloseHandle(m_h)) {\n            ERR(L\"CloseHandle failed: last error is %d\", GetLastError());\n        }\n    }\n\nprivate:\n    HANDLE m_h;\n};\n\nclass CoTaskMemFreeOnExit {\npublic:\n    CoTaskMemFreeOnExit(PVOID p) : m_p(p) {}\n    ~CoTaskMemFreeOnExit() {\n        CoTaskMemFree(m_p);\n    }\n\nprivate:\n    PVOID m_p;\n};\n\nclass CoUninitializeOnExit {\npublic:\n    ~CoUninitializeOnExit() {\n        CoUninitialize();\n    }\n};\n\nclass PropVariantClearOnExit {\npublic:\n    PropVariantClearOnExit(PROPVARIANT *p) : m_p(p) {}\n    ~PropVariantClearOnExit() {\n        HRESULT hr = PropVariantClear(m_p);\n        if (FAILED(hr)) {\n            ERR(L\"PropVariantClear failed: hr = 0x%08x\", hr);\n        }\n    }\n\nprivate:\n    PROPVARIANT *m_p;\n};\n\nclass ReleaseOnExit {\npublic:\n    
ReleaseOnExit(IUnknown *p) : m_p(p) {}\n    ~ReleaseOnExit() {\n        m_p->Release();\n    }\n\nprivate:\n    IUnknown *m_p;\n};\n\nclass SetEventOnExit {\npublic:\n    SetEventOnExit(HANDLE h) : m_h(h) {}\n    ~SetEventOnExit() {\n        if (!SetEvent(m_h)) {\n            ERR(L\"SetEvent failed: last error is %d\", GetLastError());\n        }\n    }\nprivate:\n    HANDLE m_h;\n};\n\nclass WaitForSingleObjectOnExit {\npublic:\n    WaitForSingleObjectOnExit(HANDLE h) : m_h(h) {}\n    ~WaitForSingleObjectOnExit() {\n        DWORD dwWaitResult = WaitForSingleObject(m_h, INFINITE);\n        if (WAIT_OBJECT_0 != dwWaitResult) {\n            ERR(L\"WaitForSingleObject returned unexpected result 0x%08x, last error is %d\", dwWaitResult, GetLastError());\n        }\n    }\n\nprivate:\n    HANDLE m_h;\n};"
  },
  {
    "path": "ThirdParty/loopback/common.h",
    "content": "// common.h\n\n#include <stdio.h>\n#include <windows.h>\n#include <mmsystem.h>\n#include <mmdeviceapi.h>\n#include <audioclient.h>\n#include <avrt.h>\n#include <functiondiscoverykeys_devpkey.h>\n\n#include \"log.h\"\n#include \"cleanup.h\"\n#include \"prefs.h\"\n#include \"loopback-capture.h\"\n"
  },
  {
    "path": "ThirdParty/loopback/guid.cpp",
    "content": "// guid.cpp\n\n#include <initguid.h>\n#include \"common.h\""
  },
  {
    "path": "ThirdParty/loopback/log.h",
    "content": "// log.h\n\n#define LOG(format, ...) wprintf(format L\"\\n\", __VA_ARGS__)\n#define ERR(format, ...) LOG(L\"Error: \" format, __VA_ARGS__)\n"
  },
  {
    "path": "ThirdParty/loopback/loopback-capture.cpp",
    "content": "// loopback-capture.cpp\n\n#include \"common.h\"\n\nHRESULT LoopbackCapture(\n    IMMDevice *pMMDevice,\n    HMMIO hFile,\n    bool bInt16,\n    HANDLE hStartedEvent,\n    HANDLE hStopEvent,\n    PUINT32 pnFrames\n);\n\nHRESULT WriteWaveHeader(HMMIO hFile, LPCWAVEFORMATEX pwfx, MMCKINFO *pckRIFF, MMCKINFO *pckData);\nHRESULT FinishWaveFile(HMMIO hFile, MMCKINFO *pckRIFF, MMCKINFO *pckData);\n\nDWORD WINAPI LoopbackCaptureThreadFunction(LPVOID pContext) {\n    LoopbackCaptureThreadFunctionArguments *pArgs = (LoopbackCaptureThreadFunctionArguments*)pContext;\n\n    pArgs->hr = CoInitialize(NULL);\n    if (FAILED(pArgs->hr)) {\n        ERR(L\"CoInitialize failed: hr = 0x%08x\", pArgs->hr);\n        return 0;\n    }\n    CoUninitializeOnExit cuoe;\n\n    pArgs->hr = LoopbackCapture(\n        pArgs->pMMDevice,\n        pArgs->hFile,\n        pArgs->bInt16,\n        pArgs->hStartedEvent,\n        pArgs->hStopEvent,\n        &pArgs->nFrames\n    );\n\n    return 0;\n}\n\nHRESULT LoopbackCapture(\n    IMMDevice *pMMDevice,\n    HMMIO hFile,\n    bool bInt16,\n    HANDLE hStartedEvent,\n    HANDLE hStopEvent,\n    PUINT32 pnFrames\n) {\n    HRESULT hr;\n\n    // activate an IAudioClient\n    IAudioClient *pAudioClient;\n    hr = pMMDevice->Activate(\n        __uuidof(IAudioClient),\n        CLSCTX_ALL, NULL,\n        (void**)&pAudioClient\n    );\n    if (FAILED(hr)) {\n        ERR(L\"IMMDevice::Activate(IAudioClient) failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit releaseAudioClient(pAudioClient);\n    \n    // get the default device periodicity\n    REFERENCE_TIME hnsDefaultDevicePeriod;\n    hr = pAudioClient->GetDevicePeriod(&hnsDefaultDevicePeriod, NULL);\n    if (FAILED(hr)) {\n        ERR(L\"IAudioClient::GetDevicePeriod failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n\n    // get the default device format\n    WAVEFORMATEX *pwfx;\n    hr = pAudioClient->GetMixFormat(&pwfx);\n    if (FAILED(hr)) {\n        
ERR(L\"IAudioClient::GetMixFormat failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    CoTaskMemFreeOnExit freeMixFormat(pwfx);\n\n    if (bInt16) {\n        // coerce int-16 wave format\n        // can do this in-place since we're not changing the size of the format\n        // also, the engine will auto-convert from float to int for us\n        switch (pwfx->wFormatTag) {\n            case WAVE_FORMAT_IEEE_FLOAT:\n                pwfx->wFormatTag = WAVE_FORMAT_PCM;\n                pwfx->wBitsPerSample = 16;\n                pwfx->nBlockAlign = pwfx->nChannels * pwfx->wBitsPerSample / 8;\n                pwfx->nAvgBytesPerSec = pwfx->nBlockAlign * pwfx->nSamplesPerSec;\n                break;\n\n            case WAVE_FORMAT_EXTENSIBLE:\n                {\n                    // naked scope for case-local variable\n                    PWAVEFORMATEXTENSIBLE pEx = reinterpret_cast<PWAVEFORMATEXTENSIBLE>(pwfx);\n                    if (IsEqualGUID(KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, pEx->SubFormat)) {\n                        pEx->SubFormat = KSDATAFORMAT_SUBTYPE_PCM;\n                        pEx->Samples.wValidBitsPerSample = 16;\n                        pwfx->wBitsPerSample = 16;\n                        pwfx->nBlockAlign = pwfx->nChannels * pwfx->wBitsPerSample / 8;\n                        pwfx->nAvgBytesPerSec = pwfx->nBlockAlign * pwfx->nSamplesPerSec;\n                    } else {\n                        ERR(L\"%s\", L\"Don't know how to coerce mix format to int-16\");\n                        return E_UNEXPECTED;\n                    }\n                }\n                break;\n\n            default:\n                ERR(L\"Don't know how to coerce WAVEFORMATEX with wFormatTag = 0x%08x to int-16\", pwfx->wFormatTag);\n                return E_UNEXPECTED;\n        }\n    }\n\n    MMCKINFO ckRIFF = {0};\n    MMCKINFO ckData = {0};\n    hr = WriteWaveHeader(hFile, pwfx, &ckRIFF, &ckData);\n    if (FAILED(hr)) {\n        // WriteWaveHeader does its own 
logging\n        return hr;\n    }\n\n    // create a periodic waitable timer\n    HANDLE hWakeUp = CreateWaitableTimer(NULL, FALSE, NULL);\n    if (NULL == hWakeUp) {\n        DWORD dwErr = GetLastError();\n        ERR(L\"CreateWaitableTimer failed: last error = %u\", dwErr);\n        return HRESULT_FROM_WIN32(dwErr);\n    }\n    CloseHandleOnExit closeWakeUp(hWakeUp);\n\n    UINT32 nBlockAlign = pwfx->nBlockAlign;\n    *pnFrames = 0;\n    \n    // call IAudioClient::Initialize\n    // note that AUDCLNT_STREAMFLAGS_LOOPBACK and AUDCLNT_STREAMFLAGS_EVENTCALLBACK\n    // do not work together...\n    // the \"data ready\" event never gets set\n    // so we're going to do a timer-driven loop\n    hr = pAudioClient->Initialize(\n        AUDCLNT_SHAREMODE_SHARED,\n        AUDCLNT_STREAMFLAGS_LOOPBACK,\n        0, 0, pwfx, 0\n    );\n    if (FAILED(hr)) {\n        ERR(L\"IAudioClient::Initialize failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n\n    // activate an IAudioCaptureClient\n    IAudioCaptureClient *pAudioCaptureClient;\n    hr = pAudioClient->GetService(\n        __uuidof(IAudioCaptureClient),\n        (void**)&pAudioCaptureClient\n    );\n    if (FAILED(hr)) {\n        ERR(L\"IAudioClient::GetService(IAudioCaptureClient) failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit releaseAudioCaptureClient(pAudioCaptureClient);\n    \n    // register with MMCSS\n    DWORD nTaskIndex = 0;\n    HANDLE hTask = AvSetMmThreadCharacteristics(L\"Audio\", &nTaskIndex);\n    if (NULL == hTask) {\n        DWORD dwErr = GetLastError();\n        ERR(L\"AvSetMmThreadCharacteristics failed: last error = %u\", dwErr);\n        return HRESULT_FROM_WIN32(dwErr);\n    }\n    AvRevertMmThreadCharacteristicsOnExit unregisterMmcss(hTask);\n\n    // set the waitable timer\n    LARGE_INTEGER liFirstFire;\n    liFirstFire.QuadPart = -hnsDefaultDevicePeriod / 2; // negative means relative time\n    LONG lTimeBetweenFires = (LONG)hnsDefaultDevicePeriod / 2 / (10 * 
1000); // convert to milliseconds\n    BOOL bOK = SetWaitableTimer(\n        hWakeUp,\n        &liFirstFire,\n        lTimeBetweenFires,\n        NULL, NULL, FALSE\n    );\n    if (!bOK) {\n        DWORD dwErr = GetLastError();\n        ERR(L\"SetWaitableTimer failed: last error = %u\", dwErr);\n        return HRESULT_FROM_WIN32(dwErr);\n    }\n    CancelWaitableTimerOnExit cancelWakeUp(hWakeUp);\n    \n    // call IAudioClient::Start\n    hr = pAudioClient->Start();\n    if (FAILED(hr)) {\n        ERR(L\"IAudioClient::Start failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    AudioClientStopOnExit stopAudioClient(pAudioClient);\n\n    SetEvent(hStartedEvent);\n    \n    // loopback capture loop\n    HANDLE waitArray[2] = { hStopEvent, hWakeUp };\n    DWORD dwWaitResult;\n\n    bool bDone = false;\n    bool bFirstPacket = true;\n    for (UINT32 nPasses = 0; !bDone; nPasses++) {\n        // drain data while it is available\n        UINT32 nNextPacketSize;\n        for (\n            hr = pAudioCaptureClient->GetNextPacketSize(&nNextPacketSize);\n            SUCCEEDED(hr) && nNextPacketSize > 0;\n            hr = pAudioCaptureClient->GetNextPacketSize(&nNextPacketSize)\n        ) {\n            // get the captured data\n            BYTE *pData;\n            UINT32 nNumFramesToRead;\n            DWORD dwFlags;\n\n            hr = pAudioCaptureClient->GetBuffer(\n                &pData,\n                &nNumFramesToRead,\n                &dwFlags,\n                NULL,\n                NULL\n                );\n            if (FAILED(hr)) {\n                ERR(L\"IAudioCaptureClient::GetBuffer failed on pass %u after %u frames: hr = 0x%08x\", nPasses, *pnFrames, hr);\n                return hr;\n            }\n\n            if (bFirstPacket && AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY == dwFlags) {\n                LOG(L\"%s\", L\"Probably spurious glitch reported on first packet\");\n            } else if (0 != dwFlags) {\n                
LOG(L\"IAudioCaptureClient::GetBuffer set flags to 0x%08x on pass %u after %u frames\", dwFlags, nPasses, *pnFrames);\n                return E_UNEXPECTED;\n            }\n\n            if (0 == nNumFramesToRead) {\n                ERR(L\"IAudioCaptureClient::GetBuffer said to read 0 frames on pass %u after %u frames\", nPasses, *pnFrames);\n                return E_UNEXPECTED;\n            }\n\n            LONG lBytesToWrite = nNumFramesToRead * nBlockAlign;\n#pragma prefast(suppress: __WARNING_INCORRECT_ANNOTATION, \"IAudioCaptureClient::GetBuffer SAL annotation implies a 1-byte buffer\")\n            LONG lBytesWritten = mmioWrite(hFile, reinterpret_cast<PCHAR>(pData), lBytesToWrite);\n            if (lBytesToWrite != lBytesWritten) {\n                ERR(L\"mmioWrite wrote %u bytes on pass %u after %u frames: expected %u bytes\", lBytesWritten, nPasses, *pnFrames, lBytesToWrite);\n                return E_UNEXPECTED;\n            }\n            *pnFrames += nNumFramesToRead;\n\n            hr = pAudioCaptureClient->ReleaseBuffer(nNumFramesToRead);\n            if (FAILED(hr)) {\n                ERR(L\"IAudioCaptureClient::ReleaseBuffer failed on pass %u after %u frames: hr = 0x%08x\", nPasses, *pnFrames, hr);\n                return hr;\n            }\n\n            bFirstPacket = false;\n        }\n\n        if (FAILED(hr)) {\n            ERR(L\"IAudioCaptureClient::GetNextPacketSize failed on pass %u after %u frames: hr = 0x%08x\", nPasses, *pnFrames, hr);\n            return hr;\n        }\n\n        dwWaitResult = WaitForMultipleObjects(\n            ARRAYSIZE(waitArray), waitArray,\n            FALSE, INFINITE\n        );\n\n        if (WAIT_OBJECT_0 == dwWaitResult) {\n            LOG(L\"Received stop event after %u passes and %u frames\", nPasses, *pnFrames);\n            bDone = true;\n            continue; // exits loop\n        }\n\n        if (WAIT_OBJECT_0 + 1 != dwWaitResult) {\n            ERR(L\"Unexpected WaitForMultipleObjects return value %u 
on pass %u after %u frames\", dwWaitResult, nPasses, *pnFrames);\n            return E_UNEXPECTED;\n        }\n    } // capture loop\n\n    hr = FinishWaveFile(hFile, &ckRIFF, &ckData);\n    if (FAILED(hr)) {\n        // FinishWaveFile does its own logging\n        return hr;\n    }\n    \n    return hr;\n}\n\nHRESULT WriteWaveHeader(HMMIO hFile, LPCWAVEFORMATEX pwfx, MMCKINFO *pckRIFF, MMCKINFO *pckData) {\n    MMRESULT result;\n\n    // make a RIFF/WAVE chunk\n    pckRIFF->ckid = MAKEFOURCC('R', 'I', 'F', 'F');\n    pckRIFF->fccType = MAKEFOURCC('W', 'A', 'V', 'E');\n\n    result = mmioCreateChunk(hFile, pckRIFF, MMIO_CREATERIFF);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioCreateChunk(\\\"RIFF/WAVE\\\") failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n    \n    // make a 'fmt ' chunk (within the RIFF/WAVE chunk)\n    MMCKINFO chunk;\n    chunk.ckid = MAKEFOURCC('f', 'm', 't', ' ');\n    result = mmioCreateChunk(hFile, &chunk, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioCreateChunk(\\\"fmt \\\") failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n\n    // write the WAVEFORMATEX data to it\n    LONG lBytesInWfx = sizeof(WAVEFORMATEX) + pwfx->cbSize;\n    LONG lBytesWritten =\n        mmioWrite(\n            hFile,\n            reinterpret_cast<PCHAR>(const_cast<LPWAVEFORMATEX>(pwfx)),\n            lBytesInWfx\n        );\n    if (lBytesWritten != lBytesInWfx) {\n        ERR(L\"mmioWrite(fmt data) wrote %u bytes; expected %u bytes\", lBytesWritten, lBytesInWfx);\n        return E_FAIL;\n    }\n\n    // ascend from the 'fmt ' chunk\n    result = mmioAscend(hFile, &chunk, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioAscend(\\\"fmt \\\") failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n    \n    // make a 'fact' chunk whose data is (DWORD)0\n    chunk.ckid = MAKEFOURCC('f', 'a', 'c', 't');\n    result = mmioCreateChunk(hFile, &chunk, 0);\n    if 
(MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioCreateChunk(\\\"fmt \\\") failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n\n    // write (DWORD)0 to it\n    // this is cleaned up later\n    DWORD frames = 0;\n    lBytesWritten = mmioWrite(hFile, reinterpret_cast<PCHAR>(&frames), sizeof(frames));\n    if (lBytesWritten != sizeof(frames)) {\n        ERR(L\"mmioWrite(fact data) wrote %u bytes; expected %u bytes\", lBytesWritten, (UINT32)sizeof(frames));\n        return E_FAIL;\n    }\n\n    // ascend from the 'fact' chunk\n    result = mmioAscend(hFile, &chunk, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioAscend(\\\"fact\\\" failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n\n    // make a 'data' chunk and leave the data pointer there\n    pckData->ckid = MAKEFOURCC('d', 'a', 't', 'a');\n    result = mmioCreateChunk(hFile, pckData, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioCreateChunk(\\\"data\\\") failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n\n    return S_OK;\n}\n\nHRESULT FinishWaveFile(HMMIO hFile, MMCKINFO *pckRIFF, MMCKINFO *pckData) {\n    MMRESULT result;\n\n    result = mmioAscend(hFile, pckData, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioAscend(\\\"data\\\" failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n\n    result = mmioAscend(hFile, pckRIFF, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioAscend(\\\"RIFF/WAVE\\\" failed: MMRESULT = 0x%08x\", result);\n        return E_FAIL;\n    }\n\n    return S_OK;    \n}\n"
  },
  {
    "path": "ThirdParty/loopback/loopback-capture.h",
    "content": "// loopback-capture.h\n\n// call CreateThread on this function\n// feed it the address of a LoopbackCaptureThreadFunctionArguments\n// it will capture via loopback from the IMMDevice\n// and dump output to the HMMIO\n// until the stop event is set\n// any failures will be propagated back via hr\n\nstruct LoopbackCaptureThreadFunctionArguments {\n    IMMDevice *pMMDevice;\n    bool bInt16;\n    HMMIO hFile;\n    HANDLE hStartedEvent;\n    HANDLE hStopEvent;\n    UINT32 nFrames;\n    HRESULT hr;\n};\n\nDWORD WINAPI LoopbackCaptureThreadFunction(LPVOID pContext);\n"
  },
  {
    "path": "ThirdParty/loopback/main.cpp",
    "content": "// main.cpp\n\n#include \"common.h\"\n\nint do_everything(int argc, LPCWSTR argv[]);\n\nint _cdecl wmain(int argc, LPCWSTR argv[]) {\n    HRESULT hr = S_OK;\n\n    hr = CoInitialize(NULL);\n    if (FAILED(hr)) {\n        ERR(L\"CoInitialize failed: hr = 0x%08x\", hr);\n        return -__LINE__;\n    }\n    CoUninitializeOnExit cuoe;\n\n    return do_everything(argc, argv);\n}\n\nint do_everything(int argc, LPCWSTR argv[]) {\n    HRESULT hr = S_OK;\n\n    // parse command line\n    CPrefs prefs(argc, argv, hr);\n    if (FAILED(hr)) {\n        ERR(L\"CPrefs::CPrefs constructor failed: hr = 0x%08x\", hr);\n        return -__LINE__;\n    }\n    if (S_FALSE == hr) {\n        // nothing to do\n        return 0;\n    }\n\n    // create a \"loopback capture has started\" event\n    HANDLE hStartedEvent = CreateEvent(NULL, FALSE, FALSE, NULL);\n    if (NULL == hStartedEvent) {\n        ERR(L\"CreateEvent failed: last error is %u\", GetLastError());\n        return -__LINE__;\n    }\n    CloseHandleOnExit closeStartedEvent(hStartedEvent);\n\n    // create a \"stop capturing now\" event\n    HANDLE hStopEvent = CreateEvent(NULL, FALSE, FALSE, NULL);\n    if (NULL == hStopEvent) {\n        ERR(L\"CreateEvent failed: last error is %u\", GetLastError());\n        return -__LINE__;\n    }\n    CloseHandleOnExit closeStopEvent(hStopEvent);\n\n    // create arguments for loopback capture thread\n    LoopbackCaptureThreadFunctionArguments threadArgs;\n    threadArgs.hr = E_UNEXPECTED; // thread will overwrite this\n    threadArgs.pMMDevice = prefs.m_pMMDevice;\n    threadArgs.bInt16 = prefs.m_bInt16;\n    threadArgs.hFile = prefs.m_hFile;\n    threadArgs.hStartedEvent = hStartedEvent;\n    threadArgs.hStopEvent = hStopEvent;\n    threadArgs.nFrames = 0;\n\n    HANDLE hThread = CreateThread(\n        NULL, 0,\n        LoopbackCaptureThreadFunction, &threadArgs,\n        0, NULL\n    );\n    if (NULL == hThread) {\n        ERR(L\"CreateThread failed: last error is 
%u\", GetLastError());\n        return -__LINE__;\n    }\n    CloseHandleOnExit closeThread(hThread);\n\n    // wait for either capture to start or the thread to end\n    HANDLE waitArray[2] = { hStartedEvent, hThread };\n    DWORD dwWaitResult;\n    dwWaitResult = WaitForMultipleObjects(\n        ARRAYSIZE(waitArray), waitArray,\n        FALSE, INFINITE\n    );\n\n    if (WAIT_OBJECT_0 + 1 == dwWaitResult) {\n        ERR(L\"Thread aborted before starting to loopback capture: hr = 0x%08x\", threadArgs.hr);\n        return -__LINE__;\n    }\n\n    if (WAIT_OBJECT_0 != dwWaitResult) {\n        ERR(L\"Unexpected WaitForMultipleObjects return value %u\", dwWaitResult);\n        return -__LINE__;\n    }\n\n    // at this point capture is running\n    // wait for the user to press a key or for capture to error out\n    {\n        WaitForSingleObjectOnExit waitForThread(hThread);\n        SetEventOnExit setStopEvent(hStopEvent);\n        HANDLE hStdIn = GetStdHandle(STD_INPUT_HANDLE);\n\n        if (INVALID_HANDLE_VALUE == hStdIn) {\n            ERR(L\"GetStdHandle returned INVALID_HANDLE_VALUE: last error is %u\", GetLastError());\n            return -__LINE__;\n        }\n\n        LOG(L\"%s\", L\"Press Enter to quit...\");\n\n        HANDLE rhHandles[2] = { hThread, hStdIn };\n\n        bool bKeepWaiting = true;\n        while (bKeepWaiting) {\n\n            dwWaitResult = WaitForMultipleObjects(2, rhHandles, FALSE, INFINITE);\n\n            switch (dwWaitResult) {\n\n            case WAIT_OBJECT_0: // hThread\n                ERR(L\"%s\", L\"The thread terminated early - something bad happened\");\n                bKeepWaiting = false;\n                break;\n\n            case WAIT_OBJECT_0 + 1: // hStdIn\n                // see if any of them was an Enter key-up event\n                INPUT_RECORD rInput[128];\n                DWORD nEvents;\n                if (!ReadConsoleInput(hStdIn, rInput, ARRAYSIZE(rInput), &nEvents)) {\n                    
ERR(L\"ReadConsoleInput failed: last error is %u\", GetLastError());\n                    bKeepWaiting = false;\n                }\n                else {\n                    for (DWORD i = 0; i < nEvents; i++) {\n                        if (\n                            KEY_EVENT == rInput[i].EventType &&\n                            VK_RETURN == rInput[i].Event.KeyEvent.wVirtualKeyCode &&\n                            !rInput[i].Event.KeyEvent.bKeyDown\n                            ) {\n                            LOG(L\"%s\", L\"Stopping capture...\");\n                            bKeepWaiting = false;\n                            break;\n                        }\n                    }\n                    // if none of them were Enter key-up events,\n                    // continue waiting\n                }\n                break;\n\n            default:\n                ERR(L\"WaitForMultipleObjects returned unexpected value 0x%08x\", dwWaitResult);\n                bKeepWaiting = false;\n                break;\n            } // switch\n        } // while\n    } // naked scope\n\n    // at this point the thread is definitely finished\n\n    DWORD exitCode;\n    if (!GetExitCodeThread(hThread, &exitCode)) {\n        ERR(L\"GetExitCodeThread failed: last error is %u\", GetLastError());\n        return -__LINE__;\n    }\n\n    if (0 != exitCode) {\n        ERR(L\"Loopback capture thread exit code is %u; expected 0\", exitCode);\n        return -__LINE__;\n    }\n\n    if (S_OK != threadArgs.hr) {\n        ERR(L\"Thread HRESULT is 0x%08x\", threadArgs.hr);\n        return -__LINE__;\n    }\n\n    // everything went well... 
fixup the fact chunk in the file\n    MMRESULT result = mmioClose(prefs.m_hFile, 0);\n    prefs.m_hFile = NULL;\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioClose failed: MMSYSERR = %u\", result);\n        return -__LINE__;\n    }\n\n    // reopen the file in read/write mode\n    MMIOINFO mi = {0};\n    prefs.m_hFile = mmioOpen(const_cast<LPWSTR>(prefs.m_szFilename), &mi, MMIO_READWRITE);\n    if (NULL == prefs.m_hFile) {\n        ERR(L\"mmioOpen(\\\"%ls\\\", ...) failed. wErrorRet == %u\", prefs.m_szFilename, mi.wErrorRet);\n        return -__LINE__;\n    }\n\n    // descend into the RIFF/WAVE chunk\n    MMCKINFO ckRIFF = {0};\n    ckRIFF.ckid = MAKEFOURCC('W', 'A', 'V', 'E'); // this is right for mmioDescend\n    result = mmioDescend(prefs.m_hFile, &ckRIFF, NULL, MMIO_FINDRIFF);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioDescend(\\\"WAVE\\\") failed: MMSYSERR = %u\", result);\n        return -__LINE__;\n    }\n\n    // descend into the fact chunk\n    MMCKINFO ckFact = {0};\n    ckFact.ckid = MAKEFOURCC('f', 'a', 'c', 't');\n    result = mmioDescend(prefs.m_hFile, &ckFact, &ckRIFF, MMIO_FINDCHUNK);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioDescend(\\\"fact\\\") failed: MMSYSERR = %u\", result);\n        return -__LINE__;\n    }\n\n    // write the correct data to the fact chunk\n    LONG lBytesWritten = mmioWrite(\n        prefs.m_hFile,\n        reinterpret_cast<PCHAR>(&threadArgs.nFrames),\n        sizeof(threadArgs.nFrames)\n    );\n    if (lBytesWritten != sizeof(threadArgs.nFrames)) {\n        ERR(L\"Updating the fact chunk wrote %u bytes; expected %u\", lBytesWritten, (UINT32)sizeof(threadArgs.nFrames));\n        return -__LINE__;\n    }\n\n    // ascend out of the fact chunk\n    result = mmioAscend(prefs.m_hFile, &ckFact, 0);\n    if (MMSYSERR_NOERROR != result) {\n        ERR(L\"mmioAscend(\\\"fact\\\") failed: MMSYSERR = %u\", result);\n        return -__LINE__;\n    }\n\n    // let prefs' destructor call 
mmioClose\n    \n    return 0;\n}\n"
  },
  {
    "path": "ThirdParty/loopback/prefs.cpp",
    "content": "// prefs.cpp\n\n#include \"common.h\"\n\n#define DEFAULT_FILE L\"loopback-capture.wav\"\n\nvoid usage(LPCWSTR exe);\nHRESULT get_default_device(IMMDevice **ppMMDevice);\nHRESULT list_devices();\nHRESULT get_specific_device(LPCWSTR szLongName, IMMDevice **ppMMDevice);\nHRESULT open_file(LPCWSTR szFileName, HMMIO *phFile);\n\nvoid usage(LPCWSTR exe) {\n    LOG(\n        L\"%ls -?\\n\"\n        L\"%ls --list-devices\\n\"\n        L\"%ls [--device \\\"Device long name\\\"] [--file \\\"file name\\\"] [--int-16]\\n\"\n        L\"\\n\"\n        L\"    -? prints this message.\\n\"\n        L\"    --list-devices displays the long names of all active playback devices.\\n\"\n        L\"    --device captures from the specified device (default if omitted)\\n\"\n        L\"    --file saves the output to a file (%ls if omitted)\\n\"\n        L\"    --int-16 attempts to coerce data to 16-bit integer format\",\n        exe, exe, exe, DEFAULT_FILE\n    );\n}\n\nCPrefs::CPrefs(int argc, LPCWSTR argv[], HRESULT &hr)\n: m_pMMDevice(NULL)\n, m_hFile(NULL)\n, m_bInt16(false)\n, m_pwfx(NULL)\n, m_szFilename(NULL)\n{\n    switch (argc) {\n        case 2:\n            if (0 == _wcsicmp(argv[1], L\"-?\") || 0 == _wcsicmp(argv[1], L\"/?\")) {\n                // print usage but don't actually capture\n                hr = S_FALSE;\n                usage(argv[0]);\n                return;\n            } else if (0 == _wcsicmp(argv[1], L\"--list-devices\")) {\n                // list the devices but don't actually capture\n                hr = list_devices();\n\n                // don't actually capture\n                if (S_OK == hr) {\n                    hr = S_FALSE;\n                    return;\n                }\n            }\n        // intentional fallthrough\n        \n        default:\n            // loop through arguments and parse them\n            for (int i = 1; i < argc; i++) {\n                \n                // --device\n                if (0 == 
_wcsicmp(argv[i], L\"--device\")) {\n                    if (NULL != m_pMMDevice) {\n                        ERR(L\"%s\", L\"Only one --device switch is allowed\");\n                        hr = E_INVALIDARG;\n                        return;\n                    }\n\n                    if (++i == argc) {\n                        ERR(L\"%s\", L\"--device switch requires an argument\");\n                        hr = E_INVALIDARG;\n                        return;\n                    }\n\n                    hr = get_specific_device(argv[i], &m_pMMDevice);\n                    if (FAILED(hr)) {\n                        return;\n                    }\n\n                    continue;\n                }\n\n                // --file\n                if (0 == _wcsicmp(argv[i], L\"--file\")) {\n                    if (NULL != m_szFilename) {\n                        ERR(L\"%s\", L\"Only one --file switch is allowed\");\n                        hr = E_INVALIDARG;\n                        return;\n                    }\n\n                    if (++i == argc) {\n                        ERR(L\"%s\", L\"--file switch requires an argument\");\n                        hr = E_INVALIDARG;\n                        return;\n                    }\n\n                    m_szFilename = argv[i];\n                    continue;\n                }\n\n                // --int-16\n                if (0 == _wcsicmp(argv[i], L\"--int-16\")) {\n                    if (m_bInt16) {\n                        ERR(L\"%s\", L\"Only one --int-16 switch is allowed\");\n                        hr = E_INVALIDARG;\n                        return;\n                    }\n\n                    m_bInt16 = true;\n                    continue;\n                }\n\n                ERR(L\"Invalid argument %ls\", argv[i]);\n                hr = E_INVALIDARG;\n                return;\n            }\n\n            // open default device if not specified\n            if (NULL == m_pMMDevice) {\n                hr = 
get_default_device(&m_pMMDevice);\n                if (FAILED(hr)) {\n                    return;\n                }\n            }\n\n            // if no filename specified, use default\n            if (NULL == m_szFilename) {\n                m_szFilename = DEFAULT_FILE;\n            }\n\n            // open file\n            hr = open_file(m_szFilename, &m_hFile);\n            if (FAILED(hr)) {\n                return;\n            }\n    }\n}\n\nCPrefs::~CPrefs() {\n    if (NULL != m_pMMDevice) {\n        m_pMMDevice->Release();\n    }\n\n    if (NULL != m_hFile) {\n        mmioClose(m_hFile, 0);\n    }\n\n    if (NULL != m_pwfx) {\n        CoTaskMemFree(m_pwfx);\n    }\n}\n\nHRESULT get_default_device(IMMDevice **ppMMDevice) {\n    HRESULT hr = S_OK;\n    IMMDeviceEnumerator *pMMDeviceEnumerator;\n\n    // activate a device enumerator\n    hr = CoCreateInstance(\n        __uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, \n        __uuidof(IMMDeviceEnumerator),\n        (void**)&pMMDeviceEnumerator\n    );\n    if (FAILED(hr)) {\n        ERR(L\"CoCreateInstance(IMMDeviceEnumerator) failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit releaseMMDeviceEnumerator(pMMDeviceEnumerator);\n\n    // get the default render endpoint\n    hr = pMMDeviceEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, ppMMDevice);\n    if (FAILED(hr)) {\n        ERR(L\"IMMDeviceEnumerator::GetDefaultAudioEndpoint failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n\n    return S_OK;\n}\n\nHRESULT list_devices() {\n    HRESULT hr = S_OK;\n\n    // get an enumerator\n    IMMDeviceEnumerator *pMMDeviceEnumerator;\n\n    hr = CoCreateInstance(\n        __uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, \n        __uuidof(IMMDeviceEnumerator),\n        (void**)&pMMDeviceEnumerator\n    );\n    if (FAILED(hr)) {\n        ERR(L\"CoCreateInstance(IMMDeviceEnumerator) failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit 
releaseMMDeviceEnumerator(pMMDeviceEnumerator);\n\n    IMMDeviceCollection *pMMDeviceCollection;\n\n    // get all the active render endpoints\n    hr = pMMDeviceEnumerator->EnumAudioEndpoints(\n        eRender, DEVICE_STATE_ACTIVE, &pMMDeviceCollection\n    );\n    if (FAILED(hr)) {\n        ERR(L\"IMMDeviceEnumerator::EnumAudioEndpoints failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit releaseMMDeviceCollection(pMMDeviceCollection);\n\n    UINT count;\n    hr = pMMDeviceCollection->GetCount(&count);\n    if (FAILED(hr)) {\n        ERR(L\"IMMDeviceCollection::GetCount failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    LOG(L\"Active render endpoints found: %u\", count);\n\n    for (UINT i = 0; i < count; i++) {\n        IMMDevice *pMMDevice;\n\n        // get the \"n\"th device\n        hr = pMMDeviceCollection->Item(i, &pMMDevice);\n        if (FAILED(hr)) {\n            ERR(L\"IMMDeviceCollection::Item failed: hr = 0x%08x\", hr);\n            return hr;\n        }\n        ReleaseOnExit releaseMMDevice(pMMDevice);\n\n        // open the property store on that device\n        IPropertyStore *pPropertyStore;\n        hr = pMMDevice->OpenPropertyStore(STGM_READ, &pPropertyStore);\n        if (FAILED(hr)) {\n            ERR(L\"IMMDevice::OpenPropertyStore failed: hr = 0x%08x\", hr);\n            return hr;\n        }\n        ReleaseOnExit releasePropertyStore(pPropertyStore);\n\n        // get the long name property\n        PROPVARIANT pv; PropVariantInit(&pv);\n        hr = pPropertyStore->GetValue(PKEY_Device_FriendlyName, &pv);\n        if (FAILED(hr)) {\n            ERR(L\"IPropertyStore::GetValue failed: hr = 0x%08x\", hr);\n            return hr;\n        }\n        PropVariantClearOnExit clearPv(&pv);\n\n        if (VT_LPWSTR != pv.vt) {\n            ERR(L\"PKEY_Device_FriendlyName variant type is %u - expected VT_LPWSTR\", pv.vt);\n            return E_UNEXPECTED;\n        }\n\n        LOG(L\"    %ls\", pv.pwszVal);\n    }   
 \n    \n    return S_OK;\n}\n\nHRESULT get_specific_device(LPCWSTR szLongName, IMMDevice **ppMMDevice) {\n    HRESULT hr = S_OK;\n\n    *ppMMDevice = NULL;\n    \n    // get an enumerator\n    IMMDeviceEnumerator *pMMDeviceEnumerator;\n\n    hr = CoCreateInstance(\n        __uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, \n        __uuidof(IMMDeviceEnumerator),\n        (void**)&pMMDeviceEnumerator\n    );\n    if (FAILED(hr)) {\n        ERR(L\"CoCreateInstance(IMMDeviceEnumerator) failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit releaseMMDeviceEnumerator(pMMDeviceEnumerator);\n\n    IMMDeviceCollection *pMMDeviceCollection;\n\n    // get all the active render endpoints\n    hr = pMMDeviceEnumerator->EnumAudioEndpoints(\n        eRender, DEVICE_STATE_ACTIVE, &pMMDeviceCollection\n    );\n    if (FAILED(hr)) {\n        ERR(L\"IMMDeviceEnumerator::EnumAudioEndpoints failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n    ReleaseOnExit releaseMMDeviceCollection(pMMDeviceCollection);\n\n    UINT count;\n    hr = pMMDeviceCollection->GetCount(&count);\n    if (FAILED(hr)) {\n        ERR(L\"IMMDeviceCollection::GetCount failed: hr = 0x%08x\", hr);\n        return hr;\n    }\n\n    for (UINT i = 0; i < count; i++) {\n        IMMDevice *pMMDevice;\n\n        // get the \"n\"th device\n        hr = pMMDeviceCollection->Item(i, &pMMDevice);\n        if (FAILED(hr)) {\n            ERR(L\"IMMDeviceCollection::Item failed: hr = 0x%08x\", hr);\n            return hr;\n        }\n        ReleaseOnExit releaseMMDevice(pMMDevice);\n\n        // open the property store on that device\n        IPropertyStore *pPropertyStore;\n        hr = pMMDevice->OpenPropertyStore(STGM_READ, &pPropertyStore);\n        if (FAILED(hr)) {\n            ERR(L\"IMMDevice::OpenPropertyStore failed: hr = 0x%08x\", hr);\n            return hr;\n        }\n        ReleaseOnExit releasePropertyStore(pPropertyStore);\n\n        // get the long name property\n        PROPVARIANT 
pv; PropVariantInit(&pv);\n        hr = pPropertyStore->GetValue(PKEY_Device_FriendlyName, &pv);\n        if (FAILED(hr)) {\n            ERR(L\"IPropertyStore::GetValue failed: hr = 0x%08x\", hr);\n            return hr;\n        }\n        PropVariantClearOnExit clearPv(&pv);\n\n        if (VT_LPWSTR != pv.vt) {\n            ERR(L\"PKEY_Device_FriendlyName variant type is %u - expected VT_LPWSTR\", pv.vt);\n            return E_UNEXPECTED;\n        }\n\n        // is it a match?\n        if (0 == _wcsicmp(pv.pwszVal, szLongName)) {\n            // did we already find it?\n            if (NULL == *ppMMDevice) {\n                *ppMMDevice = pMMDevice;\n                pMMDevice->AddRef();\n            } else {\n                ERR(L\"Found (at least) two devices named %ls\", szLongName);\n                return E_UNEXPECTED;\n            }\n        }\n    }\n    \n    if (NULL == *ppMMDevice) {\n        ERR(L\"Could not find a device named %ls\", szLongName);\n        return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);\n    }\n\n    return S_OK;\n}\n\nHRESULT open_file(LPCWSTR szFileName, HMMIO *phFile) {\n    MMIOINFO mi = {0};\n\n    *phFile = mmioOpen(\n        // some flags cause mmioOpen to write to this buffer\n        // but not any that we're using\n        const_cast<LPWSTR>(szFileName),\n        &mi,\n        MMIO_WRITE | MMIO_CREATE\n    );\n\n    if (NULL == *phFile) {\n        ERR(L\"mmioOpen(\\\"%ls\\\", ...) failed. wErrorRet == %u\", szFileName, mi.wErrorRet);\n        return E_FAIL;\n    }\n\n    return S_OK;\n}\n"
  },
  {
    "path": "ThirdParty/loopback/prefs.h",
    "content": "// prefs.h\n\nclass CPrefs {\npublic:\n    IMMDevice *m_pMMDevice;\n    HMMIO m_hFile;\n    bool m_bInt16;\n    PWAVEFORMATEX m_pwfx;\n    LPCWSTR m_szFilename;\n\n    // set hr to S_FALSE to abort but return success\n    CPrefs(int argc, LPCWSTR argv[], HRESULT &hr);\n    ~CPrefs();\n\n};\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/AUTHORS",
    "content": "Jean-Marc Valin (jmvalin@jmvalin.ca)\nKoen Vos (koenvos74@gmail.com)\nTimothy Terriberry (tterribe@xiph.org)\nKarsten Vandborg Sorensen (karsten.vandborg.sorensen@skype.net)\nSoren Skak Jensen (ssjensen@gn.com)\nGregory Maxwell (greg@xiph.org)\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/COPYING",
    "content": "Copyright 2001-2011 Xiph.Org, Skype Limited, Octasic,\n                    Jean-Marc Valin, Timothy B. Terriberry,\n                    CSIRO, Gregory Maxwell, Mark Borgerding,\n                    Erik de Castro Lopo\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n\n- Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n\n- Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n\n- Neither the name of Internet Society, IETF or IETF Trust, nor the\nnames of specific contributors, may be used to endorse or promote\nproducts derived from this software without specific prior written\npermission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\nOR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\nEXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\nPROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nOpus is subject to the royalty-free patent licenses which are\nspecified at:\n\nXiph.Org Foundation:\nhttps://datatracker.ietf.org/ipr/1524/\n\nMicrosoft Corporation:\nhttps://datatracker.ietf.org/ipr/1914/\n\nBroadcom Corporation:\nhttps://datatracker.ietf.org/ipr/1526/\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/INSTALL",
    "content": "Installation Instructions\n*************************\n\nCopyright (C) 1994-1996, 1999-2002, 2004-2013 Free Software Foundation,\nInc.\n\n   Copying and distribution of this file, with or without modification,\nare permitted in any medium without royalty provided the copyright\nnotice and this notice are preserved.  This file is offered as-is,\nwithout warranty of any kind.\n\nBasic Installation\n==================\n\n   Briefly, the shell command `./configure && make && make install'\nshould configure, build, and install this package.  The following\nmore-detailed instructions are generic; see the `README' file for\ninstructions specific to this package.  Some packages provide this\n`INSTALL' file but do not implement all of the features documented\nbelow.  The lack of an optional feature in a given package is not\nnecessarily a bug.  More recommendations for GNU packages can be found\nin *note Makefile Conventions: (standards)Makefile Conventions.\n\n   The `configure' shell script attempts to guess correct values for\nvarious system-dependent variables used during compilation.  It uses\nthose values to create a `Makefile' in each directory of the package.\nIt may also create one or more `.h' files containing system-dependent\ndefinitions.  Finally, it creates a shell script `config.status' that\nyou can run in the future to recreate the current configuration, and a\nfile `config.log' containing compiler output (useful mainly for\ndebugging `configure').\n\n   It can also use an optional file (typically called `config.cache'\nand enabled with `--cache-file=config.cache' or simply `-C') that saves\nthe results of its tests to speed up reconfiguring.  
Caching is\ndisabled by default to prevent problems with accidental use of stale\ncache files.\n\n   If you need to do unusual things to compile the package, please try\nto figure out how `configure' could check whether to do them, and mail\ndiffs or instructions to the address given in the `README' so they can\nbe considered for the next release.  If you are using the cache, and at\nsome point `config.cache' contains results you don't want to keep, you\nmay remove or edit it.\n\n   The file `configure.ac' (or `configure.in') is used to create\n`configure' by a program called `autoconf'.  You need `configure.ac' if\nyou want to change it or regenerate `configure' using a newer version\nof `autoconf'.\n\n   The simplest way to compile this package is:\n\n  1. `cd' to the directory containing the package's source code and type\n     `./configure' to configure the package for your system.\n\n     Running `configure' might take a while.  While running, it prints\n     some messages telling which features it is checking for.\n\n  2. Type `make' to compile the package.\n\n  3. Optionally, type `make check' to run any self-tests that come with\n     the package, generally using the just-built uninstalled binaries.\n\n  4. Type `make install' to install the programs and any data files and\n     documentation.  When installing into a prefix owned by root, it is\n     recommended that the package be configured and built as a regular\n     user, and only the `make install' phase executed with root\n     privileges.\n\n  5. Optionally, type `make installcheck' to repeat any self-tests, but\n     this time using the binaries in their final installed location.\n     This target does not install anything.  Running this target as a\n     regular user, particularly if the prior `make install' required\n     root privileges, verifies that the installation completed\n     correctly.\n\n  6. 
You can remove the program binaries and object files from the\n     source code directory by typing `make clean'.  To also remove the\n     files that `configure' created (so you can compile the package for\n     a different kind of computer), type `make distclean'.  There is\n     also a `make maintainer-clean' target, but that is intended mainly\n     for the package's developers.  If you use it, you may have to get\n     all sorts of other programs in order to regenerate files that came\n     with the distribution.\n\n  7. Often, you can also type `make uninstall' to remove the installed\n     files again.  In practice, not all packages have tested that\n     uninstallation works correctly, even though it is required by the\n     GNU Coding Standards.\n\n  8. Some packages, particularly those that use Automake, provide `make\n     distcheck', which can by used by developers to test that all other\n     targets like `make install' and `make uninstall' work correctly.\n     This target is generally not run by end users.\n\nCompilers and Options\n=====================\n\n   Some systems require unusual options for compilation or linking that\nthe `configure' script does not know about.  Run `./configure --help'\nfor details on some of the pertinent environment variables.\n\n   You can give `configure' initial values for configuration parameters\nby setting variables in the command line or in the environment.  Here\nis an example:\n\n     ./configure CC=c99 CFLAGS=-g LIBS=-lposix\n\n   *Note Defining Variables::, for more details.\n\nCompiling For Multiple Architectures\n====================================\n\n   You can compile the package for more than one kind of computer at the\nsame time, by placing the object files for each architecture in their\nown directory.  To do this, you can use GNU `make'.  `cd' to the\ndirectory where you want the object files and executables to go and run\nthe `configure' script.  
`configure' automatically checks for the\nsource code in the directory that `configure' is in and in `..'.  This\nis known as a \"VPATH\" build.\n\n   With a non-GNU `make', it is safer to compile the package for one\narchitecture at a time in the source code directory.  After you have\ninstalled the package for one architecture, use `make distclean' before\nreconfiguring for another architecture.\n\n   On MacOS X 10.5 and later systems, you can create libraries and\nexecutables that work on multiple system types--known as \"fat\" or\n\"universal\" binaries--by specifying multiple `-arch' options to the\ncompiler but only a single `-arch' option to the preprocessor.  Like\nthis:\n\n     ./configure CC=\"gcc -arch i386 -arch x86_64 -arch ppc -arch ppc64\" \\\n                 CXX=\"g++ -arch i386 -arch x86_64 -arch ppc -arch ppc64\" \\\n                 CPP=\"gcc -E\" CXXCPP=\"g++ -E\"\n\n   This is not guaranteed to produce working output in all cases, you\nmay have to build one architecture at a time and combine the results\nusing the `lipo' tool if you have problems.\n\nInstallation Names\n==================\n\n   By default, `make install' installs the package's commands under\n`/usr/local/bin', include files under `/usr/local/include', etc.  You\ncan specify an installation prefix other than `/usr/local' by giving\n`configure' the option `--prefix=PREFIX', where PREFIX must be an\nabsolute file name.\n\n   You can specify separate installation prefixes for\narchitecture-specific files and architecture-independent files.  If you\npass the option `--exec-prefix=PREFIX' to `configure', the package uses\nPREFIX as the prefix for installing programs and libraries.\nDocumentation and other data files still use the regular prefix.\n\n   In addition, if you use an unusual directory layout you can give\noptions like `--bindir=DIR' to specify different values for particular\nkinds of files.  
Run `configure --help' for a list of the directories\nyou can set and what kinds of files go in them.  In general, the\ndefault for these options is expressed in terms of `${prefix}', so that\nspecifying just `--prefix' will affect all of the other directory\nspecifications that were not explicitly provided.\n\n   The most portable way to affect installation locations is to pass the\ncorrect locations to `configure'; however, many packages provide one or\nboth of the following shortcuts of passing variable assignments to the\n`make install' command line to change installation locations without\nhaving to reconfigure or recompile.\n\n   The first method involves providing an override variable for each\naffected directory.  For example, `make install\nprefix=/alternate/directory' will choose an alternate location for all\ndirectory configuration variables that were expressed in terms of\n`${prefix}'.  Any directories that were specified during `configure',\nbut not in terms of `${prefix}', must each be overridden at install\ntime for the entire installation to be relocated.  The approach of\nmakefile variable overrides for each directory variable is required by\nthe GNU Coding Standards, and ideally causes no recompilation.\nHowever, some platforms have known limitations with the semantics of\nshared libraries that end up requiring recompilation when using this\nmethod, particularly noticeable in packages that use GNU Libtool.\n\n   The second method involves providing the `DESTDIR' variable.  For\nexample, `make install DESTDIR=/alternate/directory' will prepend\n`/alternate/directory' before all installation names.  The approach of\n`DESTDIR' overrides is not required by the GNU Coding Standards, and\ndoes not work on platforms that have drive letters.  
On the other hand,\nit does better at avoiding recompilation issues, and works well even\nwhen some directory options were not specified in terms of `${prefix}'\nat `configure' time.\n\nOptional Features\n=================\n\n   If the package supports it, you can cause programs to be installed\nwith an extra prefix or suffix on their names by giving `configure' the\noption `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.\n\n   Some packages pay attention to `--enable-FEATURE' options to\n`configure', where FEATURE indicates an optional part of the package.\nThey may also pay attention to `--with-PACKAGE' options, where PACKAGE\nis something like `gnu-as' or `x' (for the X Window System).  The\n`README' should mention any `--enable-' and `--with-' options that the\npackage recognizes.\n\n   For packages that use the X Window System, `configure' can usually\nfind the X include and library files automatically, but if it doesn't,\nyou can use the `configure' options `--x-includes=DIR' and\n`--x-libraries=DIR' to specify their locations.\n\n   Some packages offer the ability to configure how verbose the\nexecution of `make' will be.  For these packages, running `./configure\n--enable-silent-rules' sets the default to minimal output, which can be\noverridden with `make V=1'; while running `./configure\n--disable-silent-rules' sets the default to verbose, which can be\noverridden with `make V=0'.\n\nParticular systems\n==================\n\n   On HP-UX, the default C compiler is not ANSI C compatible.  If GNU\nCC is not installed, it is recommended to use the following options in\norder to use an ANSI C compiler:\n\n     ./configure CC=\"cc -Ae -D_XOPEN_SOURCE=500\"\n\nand if that doesn't work, install pre-built binaries of GCC for HP-UX.\n\n   HP-UX `make' updates targets which have the same time stamps as\ntheir prerequisites, which makes it generally unusable when shipped\ngenerated files such as `configure' are involved.  
Use GNU `make'\ninstead.\n\n   On OSF/1 a.k.a. Tru64, some versions of the default C compiler cannot\nparse its `<wchar.h>' header file.  The option `-nodtk' can be used as\na workaround.  If GNU CC is not installed, it is therefore recommended\nto try\n\n     ./configure CC=\"cc\"\n\nand if that doesn't work, try\n\n     ./configure CC=\"cc -nodtk\"\n\n   On Solaris, don't put `/usr/ucb' early in your `PATH'.  This\ndirectory contains several dysfunctional programs; working variants of\nthese programs are available in `/usr/bin'.  So, if you need `/usr/ucb'\nin your `PATH', put it _after_ `/usr/bin'.\n\n   On Haiku, software installed for all users goes in `/boot/common',\nnot `/usr/local'.  It is recommended to use the following options:\n\n     ./configure --prefix=/boot/common\n\nSpecifying the System Type\n==========================\n\n   There may be some features `configure' cannot figure out\nautomatically, but needs to determine by the type of machine the package\nwill run on.  Usually, assuming the package is built to be run on the\n_same_ architectures, `configure' can figure that out, but if it prints\na message saying it cannot guess the machine type, give it the\n`--build=TYPE' option.  TYPE can either be a short name for the system\ntype, such as `sun4', or a canonical name which has the form:\n\n     CPU-COMPANY-SYSTEM\n\nwhere SYSTEM can have one of these forms:\n\n     OS\n     KERNEL-OS\n\n   See the file `config.sub' for the possible values of each field.  
If\n`config.sub' isn't included in this package, then this package doesn't\nneed to know the machine type.\n\n   If you are _building_ compiler tools for cross-compiling, you should\nuse the option `--target=TYPE' to select the type of system they will\nproduce code for.\n\n   If you want to _use_ a cross compiler, that generates code for a\nplatform different from the build platform, you should specify the\n\"host\" platform (i.e., that on which the generated programs will\neventually be run) with `--host=TYPE'.\n\nSharing Defaults\n================\n\n   If you want to set default values for `configure' scripts to share,\nyou can create a site shell script called `config.site' that gives\ndefault values for variables like `CC', `cache_file', and `prefix'.\n`configure' looks for `PREFIX/share/config.site' if it exists, then\n`PREFIX/etc/config.site' if it exists.  Or, you can set the\n`CONFIG_SITE' environment variable to the location of the site script.\nA warning: not all `configure' scripts look for a site script.\n\nDefining Variables\n==================\n\n   Variables not defined in a site shell script can be set in the\nenvironment passed to `configure'.  However, some packages may run\nconfigure again during the build, and the customized values of these\nvariables may be lost.  In order to avoid this problem, you should set\nthem in the `configure' command line, using `VAR=value'.  For example:\n\n     ./configure CC=/usr/local2/bin/gcc\n\ncauses the specified `gcc' to be used as the C compiler (unless it is\noverridden in the site shell script).\n\nUnfortunately, this technique does not work for `CONFIG_SHELL' due to\nan Autoconf limitation.  
Until the limitation is lifted, you can use\nthis workaround:\n\n     CONFIG_SHELL=/bin/bash ./configure CONFIG_SHELL=/bin/bash\n\n`configure' Invocation\n======================\n\n   `configure' recognizes the following options to control how it\noperates.\n\n`--help'\n`-h'\n     Print a summary of all of the options to `configure', and exit.\n\n`--help=short'\n`--help=recursive'\n     Print a summary of the options unique to this package's\n     `configure', and exit.  The `short' variant lists options used\n     only in the top level, while the `recursive' variant lists options\n     also present in any nested packages.\n\n`--version'\n`-V'\n     Print the version of Autoconf used to generate the `configure'\n     script, and exit.\n\n`--cache-file=FILE'\n     Enable the cache: use and save the results of the tests in FILE,\n     traditionally `config.cache'.  FILE defaults to `/dev/null' to\n     disable caching.\n\n`--config-cache'\n`-C'\n     Alias for `--cache-file=config.cache'.\n\n`--quiet'\n`--silent'\n`-q'\n     Do not print messages saying which checks are being made.  To\n     suppress all normal output, redirect it to `/dev/null' (any error\n     messages will still be shown).\n\n`--srcdir=DIR'\n     Look for the package's source code in directory DIR.  Usually\n     `configure' can determine that directory automatically.\n\n`--prefix=DIR'\n     Use DIR as the installation prefix.  *note Installation Names::\n     for more details, including other options available for fine-tuning\n     the installation locations.\n\n`--no-create'\n`-n'\n     Run the configure checks, but stop before creating any output\n     files.\n\n`configure' also accepts some other, not widely useful, options.  Run\n`configure --help' for more details.\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/include/opus.h",
    "content": "/* Copyright (c) 2010-2011 Xiph.Org Foundation, Skype Limited\n   Written by Jean-Marc Valin and Koen Vos */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n/**\n * @file opus.h\n * @brief Opus reference implementation API\n */\n\n#ifndef OPUS_H\n#define OPUS_H\n\n#include \"opus_types.h\"\n#include \"opus_defines.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/**\n * @mainpage Opus\n *\n * The Opus codec is designed for interactive speech and audio transmission over the Internet.\n * It is designed by the IETF Codec Working Group and incorporates technology from\n * Skype's SILK codec and Xiph.Org's CELT codec.\n *\n * The Opus codec is designed to handle a wide range of interactive audio 
applications,\n * including Voice over IP, videoconferencing, in-game chat, and even remote live music\n * performances. It can scale from low bit-rate narrowband speech to very high quality\n * stereo music. Its main features are:\n *\n * @li Sampling rates from 8 to 48 kHz\n * @li Bit-rates from 6 kb/s to 510 kb/s\n * @li Support for both constant bit-rate (CBR) and variable bit-rate (VBR)\n * @li Audio bandwidth from narrowband to full-band\n * @li Support for speech and music\n * @li Support for mono and stereo\n * @li Support for multichannel (up to 255 channels)\n * @li Frame sizes from 2.5 ms to 60 ms\n * @li Good loss robustness and packet loss concealment (PLC)\n * @li Floating point and fixed-point implementation\n *\n * Documentation sections:\n * @li @ref opus_encoder\n * @li @ref opus_decoder\n * @li @ref opus_repacketizer\n * @li @ref opus_multistream\n * @li @ref opus_libinfo\n * @li @ref opus_custom\n */\n\n/** @defgroup opus_encoder Opus Encoder\n  * @{\n  *\n  * @brief This page describes the process and functions used to encode Opus.\n  *\n  * Since Opus is a stateful codec, the encoding process starts with creating an encoder\n  * state. This can be done with:\n  *\n  * @code\n  * int          error;\n  * OpusEncoder *enc;\n  * enc = opus_encoder_create(Fs, channels, application, &error);\n  * @endcode\n  *\n  * From this point, @c enc can be used for encoding an audio stream. An encoder state\n  * @b must @b not be used for more than one stream at the same time. 
Similarly, the encoder\n  * state @b must @b not be re-initialized for each frame.\n  *\n  * While opus_encoder_create() allocates memory for the state, it's also possible\n  * to initialize pre-allocated memory:\n  *\n  * @code\n  * int          size;\n  * int          error;\n  * OpusEncoder *enc;\n  * size = opus_encoder_get_size(channels);\n  * enc = malloc(size);\n  * error = opus_encoder_init(enc, Fs, channels, application);\n  * @endcode\n  *\n  * where opus_encoder_get_size() returns the required size for the encoder state. Note that\n  * future versions of this code may change the size, so no assumptions should be made about it.\n  *\n  * The encoder state is always continuous in memory and only a shallow copy is sufficient\n  * to copy it (e.g. memcpy())\n  *\n  * It is possible to change some of the encoder's settings using the opus_encoder_ctl()\n  * interface. All these settings already default to the recommended value, so they should\n  * only be changed when necessary. The most common settings one may want to change are:\n  *\n  * @code\n  * opus_encoder_ctl(enc, OPUS_SET_BITRATE(bitrate));\n  * opus_encoder_ctl(enc, OPUS_SET_COMPLEXITY(complexity));\n  * opus_encoder_ctl(enc, OPUS_SET_SIGNAL(signal_type));\n  * @endcode\n  *\n  * where\n  *\n  * @arg bitrate is in bits per second (b/s)\n  * @arg complexity is a value from 1 to 10, where 1 is the lowest complexity and 10 is the highest\n  * @arg signal_type is either OPUS_AUTO (default), OPUS_SIGNAL_VOICE, or OPUS_SIGNAL_MUSIC\n  *\n  * See @ref opus_encoderctls and @ref opus_genericctls for a complete list of parameters that can be set or queried. 
Most parameters can be set or changed at any time during a stream.\n  *\n  * To encode a frame, opus_encode() or opus_encode_float() must be called with exactly one frame (2.5, 5, 10, 20, 40 or 60 ms) of audio data:\n  * @code\n  * len = opus_encode(enc, audio_frame, frame_size, packet, max_packet);\n  * @endcode\n  *\n  * where\n  * <ul>\n  * <li>audio_frame is the audio data in opus_int16 (or float for opus_encode_float())</li>\n  * <li>frame_size is the duration of the frame in samples (per channel)</li>\n  * <li>packet is the byte array to which the compressed data is written</li>\n  * <li>max_packet is the maximum number of bytes that can be written in the packet (4000 bytes is recommended).\n  *     Do not use max_packet to control VBR target bitrate, instead use the #OPUS_SET_BITRATE CTL.</li>\n  * </ul>\n  *\n  * opus_encode() and opus_encode_float() return the number of bytes actually written to the packet.\n  * The return value <b>can be negative</b>, which indicates that an error has occurred. 
If the return value\n  * is 2 bytes or less, then the packet does not need to be transmitted (DTX).\n  *\n  * Once the encoder state is no longer needed, it can be destroyed with\n  *\n  * @code\n  * opus_encoder_destroy(enc);\n  * @endcode\n  *\n  * If the encoder was created with opus_encoder_init() rather than opus_encoder_create(),\n  * then no action is required aside from potentially freeing the memory that was manually\n  * allocated for it (calling free(enc) for the example above).\n  *\n  */\n\n/** Opus encoder state.\n  * This contains the complete state of an Opus encoder.\n  * It is position independent and can be freely copied.\n  * @see opus_encoder_create,opus_encoder_init\n  */\ntypedef struct OpusEncoder OpusEncoder;\n\n/** Gets the size of an <code>OpusEncoder</code> structure.\n  * @param[in] channels <tt>int</tt>: Number of channels.\n  *                                   This must be 1 or 2.\n  * @returns The size in bytes.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_encoder_get_size(int channels);\n\n/**\n */\n\n/** Allocates and initializes an encoder state.\n * There are three coding modes:\n *\n * @ref OPUS_APPLICATION_VOIP gives best quality at a given bitrate for voice\n *    signals. It enhances the input signal by high-pass filtering and\n *    emphasizing formants and harmonics. Optionally it includes in-band\n *    forward error correction to protect against packet loss. Use this\n *    mode for typical VoIP applications. Because of the enhancement,\n *    even at high bitrates the output may sound different from the input.\n *\n * @ref OPUS_APPLICATION_AUDIO gives best quality at a given bitrate for most\n *    non-voice signals like music. 
Use this mode for music and mixed\n *    (music/voice) content, broadcast, and applications requiring less\n *    than 15 ms of coding delay.\n *\n * @ref OPUS_APPLICATION_RESTRICTED_LOWDELAY configures low-delay mode that\n *    disables the speech-optimized mode in exchange for slightly reduced delay.\n *    This mode can only be set on a newly initialized or freshly reset encoder\n *    because it changes the codec delay.\n *\n * This is useful when the caller knows that the speech-optimized modes will not be needed (use with caution).\n * @param [in] Fs <tt>opus_int32</tt>: Sampling rate of input signal (Hz)\n *                                     This must be one of 8000, 12000, 16000,\n *                                     24000, or 48000.\n * @param [in] channels <tt>int</tt>: Number of channels (1 or 2) in input signal\n * @param [in] application <tt>int</tt>: Coding mode (@ref OPUS_APPLICATION_VOIP/@ref OPUS_APPLICATION_AUDIO/@ref OPUS_APPLICATION_RESTRICTED_LOWDELAY)\n * @param [out] error <tt>int*</tt>: @ref opus_errorcodes\n * @note Regardless of the sampling rate and number of channels selected, the Opus encoder\n * can switch to a lower audio bandwidth or number of channels if the bitrate\n * selected is too low. 
This also means that it is safe to always use 48 kHz stereo input\n * and let the encoder optimize the encoding.\n */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT OpusEncoder *opus_encoder_create(\n    opus_int32 Fs,\n    int channels,\n    int application,\n    int *error\n);\n\n/** Initializes a previously allocated encoder state\n  * The memory pointed to by st must be at least the size returned by opus_encoder_get_size().\n  * This is intended for applications which use their own allocator instead of malloc.\n  * @see opus_encoder_create(),opus_encoder_get_size()\n  * To reset a previously initialized state, use the #OPUS_RESET_STATE CTL.\n  * @param [in] st <tt>OpusEncoder*</tt>: Encoder state\n  * @param [in] Fs <tt>opus_int32</tt>: Sampling rate of input signal (Hz)\n *                                      This must be one of 8000, 12000, 16000,\n *                                      24000, or 48000.\n  * @param [in] channels <tt>int</tt>: Number of channels (1 or 2) in input signal\n  * @param [in] application <tt>int</tt>: Coding mode (OPUS_APPLICATION_VOIP/OPUS_APPLICATION_AUDIO/OPUS_APPLICATION_RESTRICTED_LOWDELAY)\n  * @retval #OPUS_OK Success or @ref opus_errorcodes\n  */\nOPUS_EXPORT int opus_encoder_init(\n    OpusEncoder *st,\n    opus_int32 Fs,\n    int channels,\n    int application\n) OPUS_ARG_NONNULL(1);\n\n/** Encodes an Opus frame.\n  * @param [in] st <tt>OpusEncoder*</tt>: Encoder state\n  * @param [in] pcm <tt>opus_int16*</tt>: Input signal (interleaved if 2 channels). 
length is frame_size*channels*sizeof(opus_int16)\n  * @param [in] frame_size <tt>int</tt>: Number of samples per channel in the\n  *                                      input signal.\n  *                                      This must be an Opus frame size for\n  *                                      the encoder's sampling rate.\n  *                                      For example, at 48 kHz the permitted\n  *                                      values are 120, 240, 480, 960, 1920,\n  *                                      and 2880.\n  *                                      Passing in a duration of less than\n  *                                      10 ms (480 samples at 48 kHz) will\n  *                                      prevent the encoder from using the LPC\n  *                                      or hybrid modes.\n  * @param [out] data <tt>unsigned char*</tt>: Output payload.\n  *                                            This must contain storage for at\n  *                                            least \\a max_data_bytes.\n  * @param [in] max_data_bytes <tt>opus_int32</tt>: Size of the allocated\n  *                                                 memory for the output\n  *                                                 payload. This may be\n  *                                                 used to impose an upper limit on\n  *                                                 the instant bitrate, but should\n  *                                                 not be used as the only bitrate\n  *                                                 control. 
Use #OPUS_SET_BITRATE to\n  *                                                 control the bitrate.\n  * @returns The length of the encoded packet (in bytes) on success or a\n  *          negative error code (see @ref opus_errorcodes) on failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_encode(\n    OpusEncoder *st,\n    const opus_int16 *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2) OPUS_ARG_NONNULL(4);\n\n/** Encodes an Opus frame from floating point input.\n  * @param [in] st <tt>OpusEncoder*</tt>: Encoder state\n  * @param [in] pcm <tt>float*</tt>: Input in float format (interleaved if 2 channels), with a normal range of +/-1.0.\n  *          Samples with a range beyond +/-1.0 are supported but will\n  *          be clipped by decoders using the integer API and should\n  *          only be used if it is known that the far end supports\n  *          extended dynamic range.\n  *          length is frame_size*channels*sizeof(float)\n  * @param [in] frame_size <tt>int</tt>: Number of samples per channel in the\n  *                                      input signal.\n  *                                      This must be an Opus frame size for\n  *                                      the encoder's sampling rate.\n  *                                      For example, at 48 kHz the permitted\n  *                                      values are 120, 240, 480, 960, 1920,\n  *                                      and 2880.\n  *                                      Passing in a duration of less than\n  *                                      10 ms (480 samples at 48 kHz) will\n  *                                      prevent the encoder from using the LPC\n  *                                      or hybrid modes.\n  * @param [out] data <tt>unsigned char*</tt>: Output payload.\n  *                                            This must contain storage for at\n  *                   
                         least \\a max_data_bytes.\n  * @param [in] max_data_bytes <tt>opus_int32</tt>: Size of the allocated\n  *                                                 memory for the output\n  *                                                 payload. This may be\n  *                                                 used to impose an upper limit on\n  *                                                 the instant bitrate, but should\n  *                                                 not be used as the only bitrate\n  *                                                 control. Use #OPUS_SET_BITRATE to\n  *                                                 control the bitrate.\n  * @returns The length of the encoded packet (in bytes) on success or a\n  *          negative error code (see @ref opus_errorcodes) on failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_encode_float(\n    OpusEncoder *st,\n    const float *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2) OPUS_ARG_NONNULL(4);\n\n/** Frees an <code>OpusEncoder</code> allocated by opus_encoder_create().\n  * @param[in] st <tt>OpusEncoder*</tt>: State to be freed.\n  */\nOPUS_EXPORT void opus_encoder_destroy(OpusEncoder *st);\n\n/** Perform a CTL function on an Opus encoder.\n  *\n  * Generally the request and subsequent arguments are generated\n  * by a convenience macro.\n  * @param st <tt>OpusEncoder*</tt>: Encoder state.\n  * @param request This and all remaining parameters should be replaced by one\n  *                of the convenience macros in @ref opus_genericctls or\n  *                @ref opus_encoderctls.\n  * @see opus_genericctls\n  * @see opus_encoderctls\n  */\nOPUS_EXPORT int opus_encoder_ctl(OpusEncoder *st, int request, ...) 
OPUS_ARG_NONNULL(1);\n/**@}*/\n\n/** @defgroup opus_decoder Opus Decoder\n  * @{\n  *\n  * @brief This page describes the process and functions used to decode Opus.\n  *\n  * The decoding process also starts with creating a decoder\n  * state. This can be done with:\n  * @code\n  * int          error;\n  * OpusDecoder *dec;\n  * dec = opus_decoder_create(Fs, channels, &error);\n  * @endcode\n  * where\n  * @li Fs is the sampling rate and must be 8000, 12000, 16000, 24000, or 48000\n  * @li channels is the number of channels (1 or 2)\n  * @li error will hold the error code in case of failure (or #OPUS_OK on success)\n  * @li the return value is a newly created decoder state to be used for decoding\n  *\n  * While opus_decoder_create() allocates memory for the state, it's also possible\n  * to initialize pre-allocated memory:\n  * @code\n  * int          size;\n  * int          error;\n  * OpusDecoder *dec;\n  * size = opus_decoder_get_size(channels);\n  * dec = malloc(size);\n  * error = opus_decoder_init(dec, Fs, channels);\n  * @endcode\n  * where opus_decoder_get_size() returns the required size for the decoder state. Note that\n  * future versions of this code may change the size, so no assumptions should be made about it.\n  *\n  * The decoder state is always continuous in memory and only a shallow copy is sufficient\n  * to copy it (e.g. 
memcpy())\n  *\n  * To decode a frame, opus_decode() or opus_decode_float() must be called with a packet of compressed audio data:\n  * @code\n  * frame_size = opus_decode(dec, packet, len, decoded, max_size, 0);\n  * @endcode\n  * where\n  *\n  * @li packet is the byte array containing the compressed data\n  * @li len is the exact number of bytes contained in the packet\n  * @li decoded is the decoded audio data in opus_int16 (or float for opus_decode_float())\n  * @li max_size is the max duration of the frame in samples (per channel) that can fit into the decoded array\n  *\n  * opus_decode() and opus_decode_float() return the number of samples (per channel) decoded from the packet.\n  * If that value is negative, then an error has occurred. This can occur if the packet is corrupted or if the audio\n  * buffer is too small to hold the decoded audio.\n  *\n  * Opus is a stateful codec with overlapping blocks and as a result Opus\n  * packets are not coded independently of each other. Packets must be\n  * passed into the decoder serially and in the correct order for a correct\n  * decode. Lost packets can be replaced with loss concealment by calling\n  * the decoder with a null pointer and zero length for the missing packet.\n  *\n  * A single codec state may only be accessed from a single thread at\n  * a time and any required locking must be performed by the caller. 
Separate\n  * streams must be decoded with separate decoder states and can be decoded\n  * in parallel unless the library was compiled with NONTHREADSAFE_PSEUDOSTACK\n  * defined.\n  *\n  */\n\n/** Opus decoder state.\n  * This contains the complete state of an Opus decoder.\n  * It is position independent and can be freely copied.\n  * @see opus_decoder_create,opus_decoder_init\n  */\ntypedef struct OpusDecoder OpusDecoder;\n\n/** Gets the size of an <code>OpusDecoder</code> structure.\n  * @param [in] channels <tt>int</tt>: Number of channels.\n  *                                    This must be 1 or 2.\n  * @returns The size in bytes.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_decoder_get_size(int channels);\n\n/** Allocates and initializes a decoder state.\n  * @param [in] Fs <tt>opus_int32</tt>: Sample rate to decode at (Hz).\n  *                                     This must be one of 8000, 12000, 16000,\n  *                                     24000, or 48000.\n  * @param [in] channels <tt>int</tt>: Number of channels (1 or 2) to decode\n  * @param [out] error <tt>int*</tt>: #OPUS_OK Success or @ref opus_errorcodes\n  *\n  * Internally Opus stores data at 48000 Hz, so that should be the default\n  * value for Fs. However, the decoder can efficiently decode to buffers\n  * at 8, 12, 16, and 24 kHz so if for some reason the caller cannot use\n  * data at the full sample rate, or knows the compressed data doesn't\n  * use the full frequency range, it can request decoding at a reduced\n  * rate. 
Likewise, the decoder is capable of filling in either mono or\n  * interleaved stereo pcm buffers, at the caller's request.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT OpusDecoder *opus_decoder_create(\n    opus_int32 Fs,\n    int channels,\n    int *error\n);\n\n/** Initializes a previously allocated decoder state.\n  * The state must be at least the size returned by opus_decoder_get_size().\n  * This is intended for applications which use their own allocator instead of malloc. @see opus_decoder_create,opus_decoder_get_size\n  * To reset a previously initialized state, use the #OPUS_RESET_STATE CTL.\n  * @param [in] st <tt>OpusDecoder*</tt>: Decoder state.\n  * @param [in] Fs <tt>opus_int32</tt>: Sampling rate to decode to (Hz).\n  *                                     This must be one of 8000, 12000, 16000,\n  *                                     24000, or 48000.\n  * @param [in] channels <tt>int</tt>: Number of channels (1 or 2) to decode\n  * @retval #OPUS_OK Success or @ref opus_errorcodes\n  */\nOPUS_EXPORT int opus_decoder_init(\n    OpusDecoder *st,\n    opus_int32 Fs,\n    int channels\n) OPUS_ARG_NONNULL(1);\n\n/** Decode an Opus packet.\n  * @param [in] st <tt>OpusDecoder*</tt>: Decoder state\n  * @param [in] data <tt>char*</tt>: Input payload. Use a NULL pointer to indicate packet loss\n  * @param [in] len <tt>opus_int32</tt>: Number of bytes in payload\n  * @param [out] pcm <tt>opus_int16*</tt>: Output signal (interleaved if 2 channels). length\n  *  is frame_size*channels*sizeof(opus_int16)\n  * @param [in] frame_size Number of samples per channel of available space in \\a pcm.\n  *  If this is less than the maximum packet duration (120ms; 5760 for 48kHz), this function will\n  *  not be capable of decoding some packets. In the case of PLC (data==NULL) or FEC (decode_fec=1),\n  *  then frame_size needs to be exactly the duration of audio that is missing, otherwise the\n  *  decoder will not be in the optimal state to decode the next incoming packet. 
For the PLC and\n  *  FEC cases, frame_size <b>must</b> be a multiple of 2.5 ms.\n  * @param [in] decode_fec <tt>int</tt>: Flag (0 or 1) to request that any in-band forward error correction data be\n  *  decoded. If no such data is available, the frame is decoded as if it were lost.\n  * @returns Number of decoded samples or @ref opus_errorcodes\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_decode(\n    OpusDecoder *st,\n    const unsigned char *data,\n    opus_int32 len,\n    opus_int16 *pcm,\n    int frame_size,\n    int decode_fec\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Decode an Opus packet with floating point output.\n  * @param [in] st <tt>OpusDecoder*</tt>: Decoder state\n  * @param [in] data <tt>char*</tt>: Input payload. Use a NULL pointer to indicate packet loss\n  * @param [in] len <tt>opus_int32</tt>: Number of bytes in payload\n  * @param [out] pcm <tt>float*</tt>: Output signal (interleaved if 2 channels). length\n  *  is frame_size*channels*sizeof(float)\n  * @param [in] frame_size Number of samples per channel of available space in \\a pcm.\n  *  If this is less than the maximum packet duration (120ms; 5760 for 48kHz), this function will\n  *  not be capable of decoding some packets. In the case of PLC (data==NULL) or FEC (decode_fec=1),\n  *  then frame_size needs to be exactly the duration of audio that is missing, otherwise the\n  *  decoder will not be in the optimal state to decode the next incoming packet. For the PLC and\n  *  FEC cases, frame_size <b>must</b> be a multiple of 2.5 ms.\n  * @param [in] decode_fec <tt>int</tt>: Flag (0 or 1) to request that any in-band forward error correction data be\n  *  decoded. 
If no such data is available, the frame is decoded as if it were lost.\n  * @returns Number of decoded samples or @ref opus_errorcodes\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_decode_float(\n    OpusDecoder *st,\n    const unsigned char *data,\n    opus_int32 len,\n    float *pcm,\n    int frame_size,\n    int decode_fec\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Perform a CTL function on an Opus decoder.\n  *\n  * Generally the request and subsequent arguments are generated\n  * by a convenience macro.\n  * @param st <tt>OpusDecoder*</tt>: Decoder state.\n  * @param request This and all remaining parameters should be replaced by one\n  *                of the convenience macros in @ref opus_genericctls or\n  *                @ref opus_decoderctls.\n  * @see opus_genericctls\n  * @see opus_decoderctls\n  */\nOPUS_EXPORT int opus_decoder_ctl(OpusDecoder *st, int request, ...) OPUS_ARG_NONNULL(1);\n\n/** Frees an <code>OpusDecoder</code> allocated by opus_decoder_create().\n  * @param[in] st <tt>OpusDecoder*</tt>: State to be freed.\n  */\nOPUS_EXPORT void opus_decoder_destroy(OpusDecoder *st);\n\n/** Parse an opus packet into one or more frames.\n  * opus_decode() will perform this operation internally so most applications do\n  * not need to use this function.\n  * This function does not copy the frames; the returned pointers are pointers into\n  * the input packet.\n  * @param [in] data <tt>char*</tt>: Opus packet to be parsed\n  * @param [in] len <tt>opus_int32</tt>: size of data\n  * @param [out] out_toc <tt>char*</tt>: TOC pointer\n  * @param [out] frames <tt>char*[48]</tt> encapsulated frames\n  * @param [out] size <tt>opus_int16[48]</tt> sizes of the encapsulated frames\n  * @param [out] payload_offset <tt>int*</tt>: returns the position of the payload within the packet (in bytes)\n  * @returns number of frames\n  */\nOPUS_EXPORT int opus_packet_parse(\n   const unsigned char *data,\n   opus_int32 len,\n   unsigned char *out_toc,\n   const 
unsigned char *frames[48],\n   opus_int16 size[48],\n   int *payload_offset\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Gets the bandwidth of an Opus packet.\n  * @param [in] data <tt>char*</tt>: Opus packet\n  * @retval OPUS_BANDWIDTH_NARROWBAND Narrowband (4kHz bandpass)\n  * @retval OPUS_BANDWIDTH_MEDIUMBAND Mediumband (6kHz bandpass)\n  * @retval OPUS_BANDWIDTH_WIDEBAND Wideband (8kHz bandpass)\n  * @retval OPUS_BANDWIDTH_SUPERWIDEBAND Superwideband (12kHz bandpass)\n  * @retval OPUS_BANDWIDTH_FULLBAND Fullband (20kHz bandpass)\n  * @retval OPUS_INVALID_PACKET The compressed data passed is corrupted or of an unsupported type\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_packet_get_bandwidth(const unsigned char *data) OPUS_ARG_NONNULL(1);\n\n/** Gets the number of samples per frame from an Opus packet.\n  * @param [in] data <tt>char*</tt>: Opus packet.\n  *                                  This must contain at least one byte of\n  *                                  data.\n  * @param [in] Fs <tt>opus_int32</tt>: Sampling rate in Hz.\n  *                                     This must be a multiple of 400, or\n  *                                     inaccurate results will be returned.\n  * @returns Number of samples per frame.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_packet_get_samples_per_frame(const unsigned char *data, opus_int32 Fs) OPUS_ARG_NONNULL(1);\n\n/** Gets the number of channels from an Opus packet.\n  * @param [in] data <tt>char*</tt>: Opus packet\n  * @returns Number of channels\n  * @retval OPUS_INVALID_PACKET The compressed data passed is corrupted or of an unsupported type\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_packet_get_nb_channels(const unsigned char *data) OPUS_ARG_NONNULL(1);\n\n/** Gets the number of frames in an Opus packet.\n  * @param [in] packet <tt>char*</tt>: Opus packet\n  * @param [in] len <tt>opus_int32</tt>: Length of packet\n  * @returns Number of frames\n  * @retval OPUS_BAD_ARG Insufficient 
data was passed to the function\n  * @retval OPUS_INVALID_PACKET The compressed data passed is corrupted or of an unsupported type\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_packet_get_nb_frames(const unsigned char packet[], opus_int32 len) OPUS_ARG_NONNULL(1);\n\n/** Gets the number of samples of an Opus packet.\n  * @param [in] packet <tt>char*</tt>: Opus packet\n  * @param [in] len <tt>opus_int32</tt>: Length of packet\n  * @param [in] Fs <tt>opus_int32</tt>: Sampling rate in Hz.\n  *                                     This must be a multiple of 400, or\n  *                                     inaccurate results will be returned.\n  * @returns Number of samples\n  * @retval OPUS_BAD_ARG Insufficient data was passed to the function\n  * @retval OPUS_INVALID_PACKET The compressed data passed is corrupted or of an unsupported type\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_packet_get_nb_samples(const unsigned char packet[], opus_int32 len, opus_int32 Fs) OPUS_ARG_NONNULL(1);\n\n/** Gets the number of samples of an Opus packet.\n  * @param [in] dec <tt>OpusDecoder*</tt>: Decoder state\n  * @param [in] packet <tt>char*</tt>: Opus packet\n  * @param [in] len <tt>opus_int32</tt>: Length of packet\n  * @returns Number of samples\n  * @retval OPUS_BAD_ARG Insufficient data was passed to the function\n  * @retval OPUS_INVALID_PACKET The compressed data passed is corrupted or of an unsupported type\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_decoder_get_nb_samples(const OpusDecoder *dec, const unsigned char packet[], opus_int32 len) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2);\n\n/** Applies soft-clipping to bring a float signal within the [-1,1] range. If\n  * the signal is already in that range, nothing is done. 
If there are values\n  * outside of [-1,1], then the signal is clipped as smoothly as possible to\n  * both fit in the range and avoid creating excessive distortion in the\n  * process.\n  * @param [in,out] pcm <tt>float*</tt>: Input PCM and modified PCM\n  * @param [in] frame_size <tt>int</tt> Number of samples per channel to process\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @param [in,out] softclip_mem <tt>float*</tt>: State memory for the soft clipping process (one float per channel, initialized to zero)\n  */\nOPUS_EXPORT void opus_pcm_soft_clip(float *pcm, int frame_size, int channels, float *softclip_mem);\n\n\n/**@}*/\n\n/** @defgroup opus_repacketizer Repacketizer\n  * @{\n  *\n  * The repacketizer can be used to merge multiple Opus packets into a single\n  * packet or alternatively to split Opus packets that have previously been\n  * merged. Splitting valid Opus packets is always guaranteed to succeed,\n  * whereas merging valid packets only succeeds if all frames have the same\n  * mode, bandwidth, and frame size, and when the total duration of the merged\n  * packet is no more than 120 ms. The 120 ms limit comes from the\n  * specification and limits decoder memory requirements at a point where\n  * framing overhead becomes negligible.\n  *\n  * The repacketizer currently only operates on elementary Opus\n  * streams. 
It will not manipulate multistream packets successfully, except in\n  * the degenerate case where they consist of data from a single stream.\n  *\n  * The repacketizing process starts with creating a repacketizer state, either\n  * by calling opus_repacketizer_create() or by allocating the memory yourself,\n  * e.g.,\n  * @code\n  * OpusRepacketizer *rp;\n  * rp = (OpusRepacketizer*)malloc(opus_repacketizer_get_size());\n  * if (rp != NULL)\n  *     opus_repacketizer_init(rp);\n  * @endcode\n  *\n  * Then the application should submit packets with opus_repacketizer_cat(),\n  * extract new packets with opus_repacketizer_out() or\n  * opus_repacketizer_out_range(), and then reset the state for the next set of\n  * input packets via opus_repacketizer_init().\n  *\n  * For example, to split a sequence of packets into individual frames:\n  * @code\n  * unsigned char *data;\n  * int len;\n  * while (get_next_packet(&data, &len))\n  * {\n  *   unsigned char out[1276];\n  *   opus_int32 out_len;\n  *   int nb_frames;\n  *   int err;\n  *   int i;\n  *   err = opus_repacketizer_cat(rp, data, len);\n  *   if (err != OPUS_OK)\n  *   {\n  *     release_packet(data);\n  *     return err;\n  *   }\n  *   nb_frames = opus_repacketizer_get_nb_frames(rp);\n  *   for (i = 0; i < nb_frames; i++)\n  *   {\n  *     out_len = opus_repacketizer_out_range(rp, i, i+1, out, sizeof(out));\n  *     if (out_len < 0)\n  *     {\n  *        release_packet(data);\n  *        return (int)out_len;\n  *     }\n  *     output_next_packet(out, out_len);\n  *   }\n  *   opus_repacketizer_init(rp);\n  *   release_packet(data);\n  * }\n  * @endcode\n  *\n  * Alternatively, to combine a sequence of frames into packets that each\n  * contain up to <code>TARGET_DURATION_MS</code> milliseconds of data:\n  * @code\n  * // The maximum number of packets with duration TARGET_DURATION_MS occurs\n  * // when the frame size is 2.5 ms, for a total of (TARGET_DURATION_MS*2/5)\n  * // packets.\n  * unsigned char 
*data[(TARGET_DURATION_MS*2/5)+1];\n  * opus_int32 len[(TARGET_DURATION_MS*2/5)+1];\n  * int nb_packets;\n  * unsigned char out[1277*(TARGET_DURATION_MS*2/5)];\n  * opus_int32 out_len;\n  * int prev_toc;\n  * nb_packets = 0;\n  * while (get_next_packet(data+nb_packets, len+nb_packets))\n  * {\n  *   int nb_frames;\n  *   int err;\n  *   nb_frames = opus_packet_get_nb_frames(data[nb_packets], len[nb_packets]);\n  *   if (nb_frames < 1)\n  *   {\n  *     release_packets(data, nb_packets+1);\n  *     return nb_frames;\n  *   }\n  *   nb_frames += opus_repacketizer_get_nb_frames(rp);\n  *   // If adding the next packet would exceed our target, or it has an\n  *   // incompatible TOC sequence, output the packets we already have before\n  *   // submitting it.\n  *   // N.B., The nb_packets > 0 check ensures we've submitted at least one\n  *   // packet since the last call to opus_repacketizer_init(). Otherwise a\n  *   // single packet longer than TARGET_DURATION_MS would cause us to try to\n  *   // output an (invalid) empty packet. It also ensures that prev_toc has\n  *   // been set to a valid value. 
Additionally, len[nb_packets] > 0 is\n  *   // guaranteed by the call to opus_packet_get_nb_frames() above, so the\n  *   // reference to data[nb_packets][0] should be valid.\n  *   if (nb_packets > 0 && (\n  *       ((prev_toc & 0xFC) != (data[nb_packets][0] & 0xFC)) ||\n  *       opus_packet_get_samples_per_frame(data[nb_packets], 48000)*nb_frames >\n  *       TARGET_DURATION_MS*48))\n  *   {\n  *     out_len = opus_repacketizer_out(rp, out, sizeof(out));\n  *     if (out_len < 0)\n  *     {\n  *        release_packets(data, nb_packets+1);\n  *        return (int)out_len;\n  *     }\n  *     output_next_packet(out, out_len);\n  *     opus_repacketizer_init(rp);\n  *     release_packets(data, nb_packets);\n  *     data[0] = data[nb_packets];\n  *     len[0] = len[nb_packets];\n  *     nb_packets = 0;\n  *   }\n  *   err = opus_repacketizer_cat(rp, data[nb_packets], len[nb_packets]);\n  *   if (err != OPUS_OK)\n  *   {\n  *     release_packets(data, nb_packets+1);\n  *     return err;\n  *   }\n  *   prev_toc = data[nb_packets][0];\n  *   nb_packets++;\n  * }\n  * // Output the final, partial packet.\n  * if (nb_packets > 0)\n  * {\n  *   out_len = opus_repacketizer_out(rp, out, sizeof(out));\n  *   release_packets(data, nb_packets);\n  *   if (out_len < 0)\n  *     return (int)out_len;\n  *   output_next_packet(out, out_len);\n  * }\n  * @endcode\n  *\n  * An alternate way of merging packets is to simply call opus_repacketizer_cat()\n  * unconditionally until it fails. 
At that point, the merged packet can be\n  * obtained with opus_repacketizer_out() and the input packet for which\n  * opus_repacketizer_cat() failed needs to be re-added to a newly reinitialized\n  * repacketizer state.\n  */\n\ntypedef struct OpusRepacketizer OpusRepacketizer;\n\n/** Gets the size of an <code>OpusRepacketizer</code> structure.\n  * @returns The size in bytes.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_repacketizer_get_size(void);\n\n/** (Re)initializes a previously allocated repacketizer state.\n  * The state must be at least the size returned by opus_repacketizer_get_size().\n  * This can be used for applications which use their own allocator instead of\n  * malloc().\n  * It must also be called to reset the queue of packets waiting to be\n  * repacketized, which is necessary if the maximum packet duration of 120 ms\n  * is reached or if you wish to submit packets with a different Opus\n  * configuration (coding mode, audio bandwidth, frame size, or channel count).\n  * Failure to do so will prevent a new packet from being added with\n  * opus_repacketizer_cat().\n  * @see opus_repacketizer_create\n  * @see opus_repacketizer_get_size\n  * @see opus_repacketizer_cat\n  * @param rp <tt>OpusRepacketizer*</tt>: The repacketizer state to\n  *                                       (re)initialize.\n  * @returns A pointer to the same repacketizer state that was passed in.\n  */\nOPUS_EXPORT OpusRepacketizer *opus_repacketizer_init(OpusRepacketizer *rp) OPUS_ARG_NONNULL(1);\n\n/** Allocates memory and initializes the new repacketizer with\n  * opus_repacketizer_init().\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT OpusRepacketizer *opus_repacketizer_create(void);\n\n/** Frees an <code>OpusRepacketizer</code> allocated by\n  * opus_repacketizer_create().\n  * @param[in] rp <tt>OpusRepacketizer*</tt>: State to be freed.\n  */\nOPUS_EXPORT void opus_repacketizer_destroy(OpusRepacketizer *rp);\n\n/** Add a packet to the current repacketizer state.\n  * This 
packet must match the configuration of any packets already submitted\n  * for repacketization since the last call to opus_repacketizer_init().\n  * This means that it must have the same coding mode, audio bandwidth, frame\n  * size, and channel count.\n  * This can be checked in advance by examining the top 6 bits of the first\n  * byte of the packet, and ensuring they match the top 6 bits of the first\n  * byte of any previously submitted packet.\n  * The total duration of audio in the repacketizer state also must not exceed\n  * 120 ms, the maximum duration of a single packet, after adding this packet.\n  *\n  * The contents of the current repacketizer state can be extracted into new\n  * packets using opus_repacketizer_out() or opus_repacketizer_out_range().\n  *\n  * In order to add a packet with a different configuration or to add more\n  * audio beyond 120 ms, you must clear the repacketizer state by calling\n  * opus_repacketizer_init().\n  * If a packet is too large to add to the current repacketizer state, no part\n  * of it is added, even if it contains multiple frames, some of which might\n  * fit.\n  * If you wish to be able to add parts of such packets, you should first use\n  * another repacketizer to split the packet into pieces and add them\n  * individually.\n  * @see opus_repacketizer_out_range\n  * @see opus_repacketizer_out\n  * @see opus_repacketizer_init\n  * @param rp <tt>OpusRepacketizer*</tt>: The repacketizer state to which to\n  *                                       add the packet.\n  * @param[in] data <tt>const unsigned char*</tt>: The packet data.\n  *                                                The application must ensure\n  *                                                this pointer remains valid\n  *                                                until the next call to\n  *                                                opus_repacketizer_init() or\n  *                                                
opus_repacketizer_destroy().\n  * @param len <tt>opus_int32</tt>: The number of bytes in the packet data.\n  * @returns An error code indicating whether or not the operation succeeded.\n  * @retval #OPUS_OK The packet's contents have been added to the repacketizer\n  *                  state.\n  * @retval #OPUS_INVALID_PACKET The packet did not have a valid TOC sequence,\n  *                              the packet's TOC sequence was not compatible\n  *                              with previously submitted packets (because\n  *                              the coding mode, audio bandwidth, frame size,\n  *                              or channel count did not match), or adding\n  *                              this packet would increase the total amount of\n  *                              audio stored in the repacketizer state to more\n  *                              than 120 ms.\n  */\nOPUS_EXPORT int opus_repacketizer_cat(OpusRepacketizer *rp, const unsigned char *data, opus_int32 len) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2);\n\n\n/** Construct a new packet from data previously submitted to the repacketizer\n  * state via opus_repacketizer_cat().\n  * @param rp <tt>OpusRepacketizer*</tt>: The repacketizer state from which to\n  *                                       construct the new packet.\n  * @param begin <tt>int</tt>: The index of the first frame in the current\n  *                            repacketizer state to include in the output.\n  * @param end <tt>int</tt>: One past the index of the last frame in the\n  *                          current repacketizer state to include in the\n  *                          output.\n  * @param[out] data <tt>const unsigned char*</tt>: The buffer in which to\n  *                                                 store the output packet.\n  * @param maxlen <tt>opus_int32</tt>: The maximum number of bytes to store in\n  *                                    the output buffer. 
In order to guarantee\n  *                                    success, this should be at least\n  *                                    <code>1276</code> for a single frame,\n  *                                    or for multiple frames,\n  *                                    <code>1277*(end-begin)</code>.\n  *                                    However, <code>1*(end-begin)</code> plus\n  *                                    the size of all packet data submitted to\n  *                                    the repacketizer since the last call to\n  *                                    opus_repacketizer_init() or\n  *                                    opus_repacketizer_create() is also\n  *                                    sufficient, and possibly much smaller.\n  * @returns The total size of the output packet on success, or an error code\n  *          on failure.\n  * @retval #OPUS_BAD_ARG <code>[begin,end)</code> was an invalid range of\n  *                       frames (begin < 0, begin >= end, or end >\n  *                       opus_repacketizer_get_nb_frames()).\n  * @retval #OPUS_BUFFER_TOO_SMALL \\a maxlen was insufficient to contain the\n  *                                complete output packet.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_repacketizer_out_range(OpusRepacketizer *rp, int begin, int end, unsigned char *data, opus_int32 maxlen) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Return the total number of frames contained in packet data submitted to\n  * the repacketizer state so far via opus_repacketizer_cat() since the last\n  * call to opus_repacketizer_init() or opus_repacketizer_create().\n  * This defines the valid range of packets that can be extracted with\n  * opus_repacketizer_out_range() or opus_repacketizer_out().\n  * @param rp <tt>OpusRepacketizer*</tt>: The repacketizer state containing the\n  *                                       frames.\n  * @returns The total number of frames contained in the packet data 
submitted\n  *          to the repacketizer state.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_repacketizer_get_nb_frames(OpusRepacketizer *rp) OPUS_ARG_NONNULL(1);\n\n/** Construct a new packet from data previously submitted to the repacketizer\n  * state via opus_repacketizer_cat().\n  * This is a convenience routine that returns all the data submitted so far\n  * in a single packet.\n  * It is equivalent to calling\n  * @code\n  * opus_repacketizer_out_range(rp, 0, opus_repacketizer_get_nb_frames(rp),\n  *                             data, maxlen)\n  * @endcode\n  * @param rp <tt>OpusRepacketizer*</tt>: The repacketizer state from which to\n  *                                       construct the new packet.\n  * @param[out] data <tt>const unsigned char*</tt>: The buffer in which to\n  *                                                 store the output packet.\n  * @param maxlen <tt>opus_int32</tt>: The maximum number of bytes to store in\n  *                                    the output buffer. 
In order to guarantee\n  *                                    success, this should be at least\n  *                                    <code>1277*opus_repacketizer_get_nb_frames(rp)</code>.\n  *                                    However,\n  *                                    <code>1*opus_repacketizer_get_nb_frames(rp)</code>\n  *                                    plus the size of all packet data\n  *                                    submitted to the repacketizer since the\n  *                                    last call to opus_repacketizer_init() or\n  *                                    opus_repacketizer_create() is also\n  *                                    sufficient, and possibly much smaller.\n  * @returns The total size of the output packet on success, or an error code\n  *          on failure.\n  * @retval #OPUS_BUFFER_TOO_SMALL \\a maxlen was insufficient to contain the\n  *                                complete output packet.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_repacketizer_out(OpusRepacketizer *rp, unsigned char *data, opus_int32 maxlen) OPUS_ARG_NONNULL(1);\n\n/** Pads a given Opus packet to a larger size (possibly changing the TOC sequence).\n  * @param[in,out] data <tt>const unsigned char*</tt>: The buffer containing the\n  *                                                   packet to pad.\n  * @param len <tt>opus_int32</tt>: The size of the packet.\n  *                                 This must be at least 1.\n  * @param new_len <tt>opus_int32</tt>: The desired size of the packet after padding.\n  *                                 This must be at least as large as len.\n  * @returns an error code\n  * @retval #OPUS_OK on success.\n  * @retval #OPUS_BAD_ARG \\a len was less than 1 or new_len was less than len.\n  * @retval #OPUS_INVALID_PACKET \\a data did not contain a valid Opus packet.\n  */\nOPUS_EXPORT int opus_packet_pad(unsigned char *data, opus_int32 len, opus_int32 new_len);\n\n/** Remove all padding 
from a given Opus packet and rewrite the TOC sequence to\n  * minimize space usage.\n  * @param[in,out] data <tt>const unsigned char*</tt>: The buffer containing the\n  *                                                   packet to strip.\n  * @param len <tt>opus_int32</tt>: The size of the packet.\n  *                                 This must be at least 1.\n  * @returns The new size of the output packet on success, or an error code\n  *          on failure.\n  * @retval #OPUS_BAD_ARG \\a len was less than 1.\n  * @retval #OPUS_INVALID_PACKET \\a data did not contain a valid Opus packet.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_packet_unpad(unsigned char *data, opus_int32 len);\n\n/** Pads a given Opus multi-stream packet to a larger size (possibly changing the TOC sequence).\n  * @param[in,out] data <tt>const unsigned char*</tt>: The buffer containing the\n  *                                                   packet to pad.\n  * @param len <tt>opus_int32</tt>: The size of the packet.\n  *                                 This must be at least 1.\n  * @param new_len <tt>opus_int32</tt>: The desired size of the packet after padding.\n  *                                 This must be at least as large as len.\n  * @param nb_streams <tt>opus_int32</tt>: The number of streams (not channels) in the packet.\n  *                                 This must be at least 1.\n  * @returns an error code\n  * @retval #OPUS_OK on success.\n  * @retval #OPUS_BAD_ARG \\a len was less than 1 or new_len was less than len.\n  * @retval #OPUS_INVALID_PACKET \\a data did not contain a valid Opus packet.\n  */\nOPUS_EXPORT int opus_multistream_packet_pad(unsigned char *data, opus_int32 len, opus_int32 new_len, int nb_streams);\n\n/** Remove all padding from a given Opus multi-stream packet and rewrite the TOC sequence to\n  * minimize space usage.\n  * @param[in,out] data <tt>const unsigned char*</tt>: The buffer containing the\n  *                                                   packet to 
strip.\n  * @param len <tt>opus_int32</tt>: The size of the packet.\n  *                                 This must be at least 1.\n  * @param nb_streams <tt>opus_int32</tt>: The number of streams (not channels) in the packet.\n  *                                 This must be at least 1.\n  * @returns The new size of the output packet on success, or an error code\n  *          on failure.\n  * @retval #OPUS_BAD_ARG \\a len was less than 1.\n  * @retval #OPUS_INVALID_PACKET \\a data did not contain a valid Opus packet.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_multistream_packet_unpad(unsigned char *data, opus_int32 len, int nb_streams);\n\n/**@}*/\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* OPUS_H */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/include/opus_custom.h",
    "content": "/* Copyright (c) 2007-2008 CSIRO\n   Copyright (c) 2007-2009 Xiph.Org Foundation\n   Copyright (c) 2008-2012 Gregory Maxwell\n   Written by Jean-Marc Valin and Gregory Maxwell */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n/**\n  @file opus_custom.h\n  @brief Opus-Custom reference implementation API\n */\n\n#ifndef OPUS_CUSTOM_H\n#define OPUS_CUSTOM_H\n\n#include \"opus_defines.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifdef CUSTOM_MODES\n# define OPUS_CUSTOM_EXPORT OPUS_EXPORT\n# define OPUS_CUSTOM_EXPORT_STATIC OPUS_EXPORT\n#else\n# define OPUS_CUSTOM_EXPORT\n# ifdef OPUS_BUILD\n#  define OPUS_CUSTOM_EXPORT_STATIC static OPUS_INLINE\n# else\n#  define OPUS_CUSTOM_EXPORT_STATIC\n# 
endif\n#endif\n\n/** @defgroup opus_custom Opus Custom\n  * @{\n  *  Opus Custom is an optional part of the Opus specification and\n  * reference implementation which uses a distinct API from the regular\n  * API and supports frame sizes that are not normally supported.\\ Use\n  * of Opus Custom is discouraged for all but very special applications\n  * for which a frame size different from 2.5, 5, 10, or 20 ms is needed\n  * (for either complexity or latency reasons) and where interoperability\n  * is less important.\n  *\n  * In addition to the interoperability limitations the use of Opus custom\n  * disables a substantial chunk of the codec and generally lowers the\n  * quality available at a given bitrate. Normally when an application needs\n  * a different frame size from the codec it should buffer to match the\n  * sizes but this adds a small amount of delay which may be important\n  * in some very low latency applications. Some transports (especially\n  * constant rate RF transports) may also work best with frames of\n  * particular durations.\n  *\n  * Libopus only supports custom modes if they are enabled at compile time.\n  *\n  * The Opus Custom API is similar to the regular API but the\n  * @ref opus_encoder_create and @ref opus_decoder_create calls take\n  * an additional mode parameter which is a structure produced by\n  * a call to @ref opus_custom_mode_create. Both the encoder and decoder\n  * must create a mode using the same sample rate (fs) and frame size\n  * (frame size) so these parameters must either be signaled out of band\n  * or fixed in a particular implementation.\n  *\n  * Similar to regular Opus the custom modes support on the fly frame size\n  * switching, but the sizes available depend on the particular frame size in\n  * use. For some initial frame sizes only a single on the fly size is available.\n  */\n\n/** Contains the state of an encoder. One encoder state is needed\n    for each stream. 
It is initialized once at the beginning of the\n    stream. Do *not* re-initialize the state for every frame.\n   @brief Encoder state\n */\ntypedef struct OpusCustomEncoder OpusCustomEncoder;\n\n/** State of the decoder. One decoder state is needed for each stream.\n    It is initialized once at the beginning of the stream. Do *not*\n    re-initialize the state for every frame.\n   @brief Decoder state\n */\ntypedef struct OpusCustomDecoder OpusCustomDecoder;\n\n/** The mode contains all the information necessary to create an\n    encoder. Both the encoder and decoder need to be initialized\n    with exactly the same mode, otherwise the output will be\n    corrupted.\n   @brief Mode configuration\n */\ntypedef struct OpusCustomMode OpusCustomMode;\n\n/** Creates a new mode struct. This will be passed to an encoder or\n  * decoder. The mode MUST NOT BE DESTROYED until the encoders and\n  * decoders that use it are destroyed as well.\n  * @param [in] Fs <tt>int</tt>: Sampling rate (8000 to 96000 Hz)\n  * @param [in] frame_size <tt>int</tt>: Number of samples (per channel) to encode in each\n  *        packet (64 - 1024, prime factorization must contain zero or more 2s, 3s, or 5s and no other primes)\n  * @param [out] error <tt>int*</tt>: Returned error code (if NULL, no error will be returned)\n  * @return A newly created mode\n  */\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT OpusCustomMode *opus_custom_mode_create(opus_int32 Fs, int frame_size, int *error);\n\n/** Destroys a mode struct. 
Only call this after all encoders and\n  * decoders using this mode are destroyed as well.\n  * @param [in] mode <tt>OpusCustomMode*</tt>: Mode to be freed.\n  */\nOPUS_CUSTOM_EXPORT void opus_custom_mode_destroy(OpusCustomMode *mode);\n\n\n#if !defined(OPUS_BUILD) || defined(CELT_ENCODER_C)\n\n/* Encoder */\n/** Gets the size of an OpusCustomEncoder structure.\n  * @param [in] mode <tt>OpusCustomMode *</tt>: Mode configuration\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @returns size\n  */\nOPUS_CUSTOM_EXPORT_STATIC OPUS_WARN_UNUSED_RESULT int opus_custom_encoder_get_size(\n    const OpusCustomMode *mode,\n    int channels\n) OPUS_ARG_NONNULL(1);\n\n# ifdef CUSTOM_MODES\n/** Initializes a previously allocated encoder state\n  * The memory pointed to by st must be the size returned by opus_custom_encoder_get_size.\n  * This is intended for applications which use their own allocator instead of malloc.\n  * @see opus_custom_encoder_create(),opus_custom_encoder_get_size()\n  * To reset a previously initialized state use the OPUS_RESET_STATE CTL.\n  * @param [in] st <tt>OpusCustomEncoder*</tt>: Encoder state\n  * @param [in] mode <tt>OpusCustomMode *</tt>: Contains all the information about the characteristics of\n  *  the stream (must be the same characteristics as used for the\n  *  decoder)\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @return OPUS_OK Success or @ref opus_errorcodes\n  */\nOPUS_CUSTOM_EXPORT int opus_custom_encoder_init(\n    OpusCustomEncoder *st,\n    const OpusCustomMode *mode,\n    int channels\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2);\n# endif\n#endif\n\n\n/** Creates a new encoder state. 
Each stream needs its own encoder\n  * state (can't be shared across simultaneous streams).\n  * @param [in] mode <tt>OpusCustomMode*</tt>: Contains all the information about the characteristics of\n  *  the stream (must be the same characteristics as used for the\n  *  decoder)\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @param [out] error <tt>int*</tt>: Returns an error code\n  * @return Newly created encoder state.\n*/\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT OpusCustomEncoder *opus_custom_encoder_create(\n    const OpusCustomMode *mode,\n    int channels,\n    int *error\n) OPUS_ARG_NONNULL(1);\n\n\n/** Destroys an encoder state.\n  * @param[in] st <tt>OpusCustomEncoder*</tt>: State to be freed.\n  */\nOPUS_CUSTOM_EXPORT void opus_custom_encoder_destroy(OpusCustomEncoder *st);\n\n/** Encodes a frame of audio.\n  * @param [in] st <tt>OpusCustomEncoder*</tt>: Encoder state\n  * @param [in] pcm <tt>float*</tt>: PCM audio in float format, with a normal range of +/-1.0.\n  *          Samples with a range beyond +/-1.0 are supported but will\n  *          be clipped by decoders using the integer API and should\n  *          only be used if it is known that the far end supports\n  *          extended dynamic range. There must be exactly\n  *          frame_size samples per channel.\n  * @param [in] frame_size <tt>int</tt>: Number of samples per frame of input signal\n  * @param [out] compressed <tt>char *</tt>: The compressed data is written here. This may not alias pcm and must be at least maxCompressedBytes long.\n  * @param [in] maxCompressedBytes <tt>int</tt>: Maximum number of bytes to use for compressing the frame\n  *          (can change from one frame to another)\n  * @return Number of bytes written to \"compressed\".\n  *       If negative, an error has occurred (see error codes). It is IMPORTANT that\n  *       the length returned be somehow transmitted to the decoder. 
Otherwise, no\n  *       decoding is possible.\n  */\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT int opus_custom_encode_float(\n    OpusCustomEncoder *st,\n    const float *pcm,\n    int frame_size,\n    unsigned char *compressed,\n    int maxCompressedBytes\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2) OPUS_ARG_NONNULL(4);\n\n/** Encodes a frame of audio.\n  * @param [in] st <tt>OpusCustomEncoder*</tt>: Encoder state\n  * @param [in] pcm <tt>opus_int16*</tt>: PCM audio in signed 16-bit format (native endian).\n  *          There must be exactly frame_size samples per channel.\n  * @param [in] frame_size <tt>int</tt>: Number of samples per frame of input signal\n  * @param [out] compressed <tt>char *</tt>: The compressed data is written here. This may not alias pcm and must be at least maxCompressedBytes long.\n  * @param [in] maxCompressedBytes <tt>int</tt>: Maximum number of bytes to use for compressing the frame\n  *          (can change from one frame to another)\n  * @return Number of bytes written to \"compressed\".\n  *       If negative, an error has occurred (see error codes). It is IMPORTANT that\n  *       the length returned be somehow transmitted to the decoder. Otherwise, no\n  *       decoding is possible.\n */\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT int opus_custom_encode(\n    OpusCustomEncoder *st,\n    const opus_int16 *pcm,\n    int frame_size,\n    unsigned char *compressed,\n    int maxCompressedBytes\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2) OPUS_ARG_NONNULL(4);\n\n/** Perform a CTL function on an Opus custom encoder.\n  *\n  * Generally the request and subsequent arguments are generated\n  * by a convenience macro.\n  * @see opus_encoderctls\n  */\nOPUS_CUSTOM_EXPORT int opus_custom_encoder_ctl(OpusCustomEncoder * OPUS_RESTRICT st, int request, ...) 
OPUS_ARG_NONNULL(1);\n\n\n#if !defined(OPUS_BUILD) || defined(CELT_DECODER_C)\n/* Decoder */\n\n/** Gets the size of an OpusCustomDecoder structure.\n  * @param [in] mode <tt>OpusCustomMode *</tt>: Mode configuration\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @returns size\n  */\nOPUS_CUSTOM_EXPORT_STATIC OPUS_WARN_UNUSED_RESULT int opus_custom_decoder_get_size(\n    const OpusCustomMode *mode,\n    int channels\n) OPUS_ARG_NONNULL(1);\n\n/** Initializes a previously allocated decoder state\n  * The memory pointed to by st must be the size returned by opus_custom_decoder_get_size.\n  * This is intended for applications which use their own allocator instead of malloc.\n  * @see opus_custom_decoder_create(),opus_custom_decoder_get_size()\n  * To reset a previously initialized state use the OPUS_RESET_STATE CTL.\n  * @param [in] st <tt>OpusCustomDecoder*</tt>: Decoder state\n  * @param [in] mode <tt>OpusCustomMode *</tt>: Contains all the information about the characteristics of\n  *  the stream (must be the same characteristics as used for the\n  *  encoder)\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @return OPUS_OK Success or @ref opus_errorcodes\n  */\nOPUS_CUSTOM_EXPORT_STATIC int opus_custom_decoder_init(\n    OpusCustomDecoder *st,\n    const OpusCustomMode *mode,\n    int channels\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2);\n\n#endif\n\n\n/** Creates a new decoder state. 
Each stream needs its own decoder state (can't\n  * be shared across simultaneous streams).\n  * @param [in] mode <tt>OpusCustomMode</tt>: Contains all the information about the characteristics of the\n  *          stream (must be the same characteristics as used for the encoder)\n  * @param [in] channels <tt>int</tt>: Number of channels\n  * @param [out] error <tt>int*</tt>: Returns an error code\n  * @return Newly created decoder state.\n  */\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT OpusCustomDecoder *opus_custom_decoder_create(\n    const OpusCustomMode *mode,\n    int channels,\n    int *error\n) OPUS_ARG_NONNULL(1);\n\n/** Destroys a decoder state.\n  * @param[in] st <tt>OpusCustomDecoder*</tt>: State to be freed.\n  */\nOPUS_CUSTOM_EXPORT void opus_custom_decoder_destroy(OpusCustomDecoder *st);\n\n/** Decode an opus custom frame with floating point output\n  * @param [in] st <tt>OpusCustomDecoder*</tt>: Decoder state\n  * @param [in] data <tt>char*</tt>: Input payload. Use a NULL pointer to indicate packet loss\n  * @param [in] len <tt>int</tt>: Number of bytes in payload\n  * @param [out] pcm <tt>float*</tt>: Output signal (interleaved if 2 channels). length\n  *  is frame_size*channels*sizeof(float)\n  * @param [in] frame_size Number of samples per channel of available space in *pcm.\n  * @returns Number of decoded samples or @ref opus_errorcodes\n  */\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT int opus_custom_decode_float(\n    OpusCustomDecoder *st,\n    const unsigned char *data,\n    int len,\n    float *pcm,\n    int frame_size\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Decode an opus custom frame\n  * @param [in] st <tt>OpusCustomDecoder*</tt>: Decoder state\n  * @param [in] data <tt>char*</tt>: Input payload. Use a NULL pointer to indicate packet loss\n  * @param [in] len <tt>int</tt>: Number of bytes in payload\n  * @param [out] pcm <tt>opus_int16*</tt>: Output signal (interleaved if 2 channels). 
length\n  *  is frame_size*channels*sizeof(opus_int16)\n  * @param [in] frame_size Number of samples per channel of available space in *pcm.\n  * @returns Number of decoded samples or @ref opus_errorcodes\n  */\nOPUS_CUSTOM_EXPORT OPUS_WARN_UNUSED_RESULT int opus_custom_decode(\n    OpusCustomDecoder *st,\n    const unsigned char *data,\n    int len,\n    opus_int16 *pcm,\n    int frame_size\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Perform a CTL function on an Opus custom decoder.\n  *\n  * Generally the request and subsequent arguments are generated\n  * by a convenience macro.\n  * @see opus_genericctls\n  */\nOPUS_CUSTOM_EXPORT int opus_custom_decoder_ctl(OpusCustomDecoder * OPUS_RESTRICT st, int request, ...) OPUS_ARG_NONNULL(1);\n\n/**@}*/\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* OPUS_CUSTOM_H */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/include/opus_defines.h",
    "content": "/* Copyright (c) 2010-2011 Xiph.Org Foundation, Skype Limited\n   Written by Jean-Marc Valin and Koen Vos */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n/**\n * @file opus_defines.h\n * @brief Opus reference implementation constants\n */\n\n#ifndef OPUS_DEFINES_H\n#define OPUS_DEFINES_H\n\n#include \"opus_types.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/** @defgroup opus_errorcodes Error codes\n * @{\n */\n/** No error @hideinitializer*/\n#define OPUS_OK                0\n/** One or more invalid/out of range arguments @hideinitializer*/\n#define OPUS_BAD_ARG          -1\n/** Not enough bytes allocated in the buffer @hideinitializer*/\n#define OPUS_BUFFER_TOO_SMALL -2\n/** An 
internal error was detected @hideinitializer*/\n#define OPUS_INTERNAL_ERROR   -3\n/** The compressed data passed is corrupted @hideinitializer*/\n#define OPUS_INVALID_PACKET   -4\n/** Invalid/unsupported request number @hideinitializer*/\n#define OPUS_UNIMPLEMENTED    -5\n/** An encoder or decoder structure is invalid or already freed @hideinitializer*/\n#define OPUS_INVALID_STATE    -6\n/** Memory allocation has failed @hideinitializer*/\n#define OPUS_ALLOC_FAIL       -7\n/**@}*/\n\n/** @cond OPUS_INTERNAL_DOC */\n/**Export control for opus functions */\n\n#ifndef OPUS_EXPORT\n# if defined(WIN32)\n#  if defined(OPUS_BUILD) && defined(DLL_EXPORT)\n#   define OPUS_EXPORT __declspec(dllexport)\n#  else\n#   define OPUS_EXPORT\n#  endif\n# elif defined(__GNUC__) && defined(OPUS_BUILD)\n#  define OPUS_EXPORT __attribute__ ((visibility (\"default\")))\n# else\n#  define OPUS_EXPORT\n# endif\n#endif\n\n# if !defined(OPUS_GNUC_PREREQ)\n#  if defined(__GNUC__)&&defined(__GNUC_MINOR__)\n#   define OPUS_GNUC_PREREQ(_maj,_min) \\\n ((__GNUC__<<16)+__GNUC_MINOR__>=((_maj)<<16)+(_min))\n#  else\n#   define OPUS_GNUC_PREREQ(_maj,_min) 0\n#  endif\n# endif\n\n#if (!defined(__STDC_VERSION__) || (__STDC_VERSION__ < 199901L) )\n# if OPUS_GNUC_PREREQ(3,0)\n#  define OPUS_RESTRICT __restrict__\n# elif (defined(_MSC_VER) && _MSC_VER >= 1400)\n#  define OPUS_RESTRICT __restrict\n# else\n#  define OPUS_RESTRICT\n# endif\n#else\n# define OPUS_RESTRICT restrict\n#endif\n\n#if (!defined(__STDC_VERSION__) || (__STDC_VERSION__ < 199901L) )\n# if OPUS_GNUC_PREREQ(2,7)\n#  define OPUS_INLINE __inline__\n# elif (defined(_MSC_VER))\n#  define OPUS_INLINE __inline\n# else\n#  define OPUS_INLINE\n# endif\n#else\n# define OPUS_INLINE inline\n#endif\n\n/**Warning attributes for opus functions\n  * NONNULL is not used in OPUS_BUILD to avoid the compiler optimizing out\n  * some paranoid null checks. 
*/\n#if defined(__GNUC__) && OPUS_GNUC_PREREQ(3, 4)\n# define OPUS_WARN_UNUSED_RESULT __attribute__ ((__warn_unused_result__))\n#else\n# define OPUS_WARN_UNUSED_RESULT\n#endif\n#if !defined(OPUS_BUILD) && defined(__GNUC__) && OPUS_GNUC_PREREQ(3, 4)\n# define OPUS_ARG_NONNULL(_x)  __attribute__ ((__nonnull__(_x)))\n#else\n# define OPUS_ARG_NONNULL(_x)\n#endif\n\n/** These are the actual Encoder CTL ID numbers.\n  * They should not be used directly by applications.\n  * In general, SETs should be even and GETs should be odd.*/\n#define OPUS_SET_APPLICATION_REQUEST         4000\n#define OPUS_GET_APPLICATION_REQUEST         4001\n#define OPUS_SET_BITRATE_REQUEST             4002\n#define OPUS_GET_BITRATE_REQUEST             4003\n#define OPUS_SET_MAX_BANDWIDTH_REQUEST       4004\n#define OPUS_GET_MAX_BANDWIDTH_REQUEST       4005\n#define OPUS_SET_VBR_REQUEST                 4006\n#define OPUS_GET_VBR_REQUEST                 4007\n#define OPUS_SET_BANDWIDTH_REQUEST           4008\n#define OPUS_GET_BANDWIDTH_REQUEST           4009\n#define OPUS_SET_COMPLEXITY_REQUEST          4010\n#define OPUS_GET_COMPLEXITY_REQUEST          4011\n#define OPUS_SET_INBAND_FEC_REQUEST          4012\n#define OPUS_GET_INBAND_FEC_REQUEST          4013\n#define OPUS_SET_PACKET_LOSS_PERC_REQUEST    4014\n#define OPUS_GET_PACKET_LOSS_PERC_REQUEST    4015\n#define OPUS_SET_DTX_REQUEST                 4016\n#define OPUS_GET_DTX_REQUEST                 4017\n#define OPUS_SET_VBR_CONSTRAINT_REQUEST      4020\n#define OPUS_GET_VBR_CONSTRAINT_REQUEST      4021\n#define OPUS_SET_FORCE_CHANNELS_REQUEST      4022\n#define OPUS_GET_FORCE_CHANNELS_REQUEST      4023\n#define OPUS_SET_SIGNAL_REQUEST              4024\n#define OPUS_GET_SIGNAL_REQUEST              4025\n#define OPUS_GET_LOOKAHEAD_REQUEST           4027\n/* #define OPUS_RESET_STATE 4028 */\n#define OPUS_GET_SAMPLE_RATE_REQUEST         4029\n#define OPUS_GET_FINAL_RANGE_REQUEST         4031\n#define OPUS_GET_PITCH_REQUEST               
4033\n#define OPUS_SET_GAIN_REQUEST                4034\n#define OPUS_GET_GAIN_REQUEST                4045 /* Should have been 4035 */\n#define OPUS_SET_LSB_DEPTH_REQUEST           4036\n#define OPUS_GET_LSB_DEPTH_REQUEST           4037\n#define OPUS_GET_LAST_PACKET_DURATION_REQUEST 4039\n#define OPUS_SET_EXPERT_FRAME_DURATION_REQUEST 4040\n#define OPUS_GET_EXPERT_FRAME_DURATION_REQUEST 4041\n#define OPUS_SET_PREDICTION_DISABLED_REQUEST 4042\n#define OPUS_GET_PREDICTION_DISABLED_REQUEST 4043\n/* Don't use 4045, it's already taken by OPUS_GET_GAIN_REQUEST */\n#define OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST 4046\n#define OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST 4047\n\n/* Macros to trigger compilation errors when the wrong types are provided to a CTL */\n#define __opus_check_int(x) (((void)((x) == (opus_int32)0)), (opus_int32)(x))\n#define __opus_check_int_ptr(ptr) ((ptr) + ((ptr) - (opus_int32*)(ptr)))\n#define __opus_check_uint_ptr(ptr) ((ptr) + ((ptr) - (opus_uint32*)(ptr)))\n#define __opus_check_val16_ptr(ptr) ((ptr) + ((ptr) - (opus_val16*)(ptr)))\n/** @endcond */\n\n/** @defgroup opus_ctlvalues Pre-defined values for CTL interface\n  * @see opus_genericctls, opus_encoderctls\n  * @{\n  */\n/* Values for the various encoder CTLs */\n#define OPUS_AUTO                           -1000 /**<Auto/default setting @hideinitializer*/\n#define OPUS_BITRATE_MAX                       -1 /**<Maximum bitrate @hideinitializer*/\n\n/** Best for most VoIP/videoconference applications where listening quality and intelligibility matter most\n * @hideinitializer */\n#define OPUS_APPLICATION_VOIP                2048\n/** Best for broadcast/high-fidelity application where the decoded audio should be as close as possible to the input\n * @hideinitializer */\n#define OPUS_APPLICATION_AUDIO               2049\n/** Only use when lowest-achievable latency is what matters most. 
Voice-optimized modes cannot be used.\n * @hideinitializer */\n#define OPUS_APPLICATION_RESTRICTED_LOWDELAY 2051\n\n#define OPUS_SIGNAL_VOICE                    3001 /**< Signal being encoded is voice */\n#define OPUS_SIGNAL_MUSIC                    3002 /**< Signal being encoded is music */\n#define OPUS_BANDWIDTH_NARROWBAND            1101 /**< 4 kHz bandpass @hideinitializer*/\n#define OPUS_BANDWIDTH_MEDIUMBAND            1102 /**< 6 kHz bandpass @hideinitializer*/\n#define OPUS_BANDWIDTH_WIDEBAND              1103 /**< 8 kHz bandpass @hideinitializer*/\n#define OPUS_BANDWIDTH_SUPERWIDEBAND         1104 /**<12 kHz bandpass @hideinitializer*/\n#define OPUS_BANDWIDTH_FULLBAND              1105 /**<20 kHz bandpass @hideinitializer*/\n\n#define OPUS_FRAMESIZE_ARG                   5000 /**< Select frame size from the argument (default) */\n#define OPUS_FRAMESIZE_2_5_MS                5001 /**< Use 2.5 ms frames */\n#define OPUS_FRAMESIZE_5_MS                  5002 /**< Use 5 ms frames */\n#define OPUS_FRAMESIZE_10_MS                 5003 /**< Use 10 ms frames */\n#define OPUS_FRAMESIZE_20_MS                 5004 /**< Use 20 ms frames */\n#define OPUS_FRAMESIZE_40_MS                 5005 /**< Use 40 ms frames */\n#define OPUS_FRAMESIZE_60_MS                 5006 /**< Use 60 ms frames */\n#define OPUS_FRAMESIZE_80_MS                 5007 /**< Use 80 ms frames */\n#define OPUS_FRAMESIZE_100_MS                5008 /**< Use 100 ms frames */\n#define OPUS_FRAMESIZE_120_MS                5009 /**< Use 120 ms frames */\n\n/**@}*/\n\n\n/** @defgroup opus_encoderctls Encoder related CTLs\n  *\n  * These are convenience macros for use with the \\c opus_encode_ctl\n  * interface. 
They are used to generate the appropriate series of\n  * arguments for that call, passing the correct type, size and so\n  * on as expected for each particular request.\n  *\n  * Some usage examples:\n  *\n  * @code\n  * int ret;\n  * ret = opus_encoder_ctl(enc_ctx, OPUS_SET_BANDWIDTH(OPUS_AUTO));\n  * if (ret != OPUS_OK) return ret;\n  *\n  * opus_int32 rate;\n  * opus_encoder_ctl(enc_ctx, OPUS_GET_BANDWIDTH(&rate));\n  *\n  * opus_encoder_ctl(enc_ctx, OPUS_RESET_STATE);\n  * @endcode\n  *\n  * @see opus_genericctls, opus_encoder\n  * @{\n  */\n\n/** Configures the encoder's computational complexity.\n  * The supported range is 0-10 inclusive with 10 representing the highest complexity.\n  * @see OPUS_GET_COMPLEXITY\n  * @param[in] x <tt>opus_int32</tt>: Allowed values: 0-10, inclusive.\n  *\n  * @hideinitializer */\n#define OPUS_SET_COMPLEXITY(x) OPUS_SET_COMPLEXITY_REQUEST, __opus_check_int(x)\n/** Gets the encoder's complexity configuration.\n  * @see OPUS_SET_COMPLEXITY\n  * @param[out] x <tt>opus_int32 *</tt>: Returns a value in the range 0-10,\n  *                                      inclusive.\n  * @hideinitializer */\n#define OPUS_GET_COMPLEXITY(x) OPUS_GET_COMPLEXITY_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures the bitrate in the encoder.\n  * Rates from 500 to 512000 bits per second are meaningful, as well as the\n  * special values #OPUS_AUTO and #OPUS_BITRATE_MAX.\n  * The value #OPUS_BITRATE_MAX can be used to cause the codec to use as much\n  * rate as it can, which is useful for controlling the rate by adjusting the\n  * output buffer size.\n  * @see OPUS_GET_BITRATE\n  * @param[in] x <tt>opus_int32</tt>: Bitrate in bits per second. 
The default\n  *                                   is determined based on the number of\n  *                                   channels and the input sampling rate.\n  * @hideinitializer */\n#define OPUS_SET_BITRATE(x) OPUS_SET_BITRATE_REQUEST, __opus_check_int(x)\n/** Gets the encoder's bitrate configuration.\n  * @see OPUS_SET_BITRATE\n  * @param[out] x <tt>opus_int32 *</tt>: Returns the bitrate in bits per second.\n  *                                      The default is determined based on the\n  *                                      number of channels and the input\n  *                                      sampling rate.\n  * @hideinitializer */\n#define OPUS_GET_BITRATE(x) OPUS_GET_BITRATE_REQUEST, __opus_check_int_ptr(x)\n\n/** Enables or disables variable bitrate (VBR) in the encoder.\n  * The configured bitrate may not be met exactly because frames must\n  * be an integer number of bytes in length.\n  * @see OPUS_GET_VBR\n  * @see OPUS_SET_VBR_CONSTRAINT\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>0</dt><dd>Hard CBR. For LPC/hybrid modes at very low bit-rate, this can\n  *               cause noticeable quality degradation.</dd>\n  * <dt>1</dt><dd>VBR (default). The exact type of VBR is controlled by\n  *               #OPUS_SET_VBR_CONSTRAINT.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_VBR(x) OPUS_SET_VBR_REQUEST, __opus_check_int(x)\n/** Determine if variable bitrate (VBR) is enabled in the encoder.\n  * @see OPUS_SET_VBR\n  * @see OPUS_GET_VBR_CONSTRAINT\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>0</dt><dd>Hard CBR.</dd>\n  * <dt>1</dt><dd>VBR (default). 
The exact type of VBR may be retrieved via\n  *               #OPUS_GET_VBR_CONSTRAINT.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_VBR(x) OPUS_GET_VBR_REQUEST, __opus_check_int_ptr(x)\n\n/** Enables or disables constrained VBR in the encoder.\n  * This setting is ignored when the encoder is in CBR mode.\n  * @warning Only the MDCT mode of Opus currently heeds the constraint.\n  *  Speech mode ignores it completely, hybrid mode may fail to obey it\n  *  if the LPC layer uses more bitrate than the constraint would have\n  *  permitted.\n  * @see OPUS_GET_VBR_CONSTRAINT\n  * @see OPUS_SET_VBR\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>0</dt><dd>Unconstrained VBR.</dd>\n  * <dt>1</dt><dd>Constrained VBR (default). This creates a maximum of one\n  *               frame of buffering delay assuming a transport with a\n  *               serialization speed of the nominal bitrate.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_VBR_CONSTRAINT(x) OPUS_SET_VBR_CONSTRAINT_REQUEST, __opus_check_int(x)\n/** Determine if constrained VBR is enabled in the encoder.\n  * @see OPUS_SET_VBR_CONSTRAINT\n  * @see OPUS_GET_VBR\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>0</dt><dd>Unconstrained VBR.</dd>\n  * <dt>1</dt><dd>Constrained VBR (default).</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_VBR_CONSTRAINT(x) OPUS_GET_VBR_CONSTRAINT_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures mono/stereo forcing in the encoder.\n  * This can force the encoder to produce packets encoded as either mono or\n  * stereo, regardless of the format of the input audio. 
This is useful when\n  * the caller knows that the input signal is currently a mono source embedded\n  * in a stereo stream.\n  * @see OPUS_GET_FORCE_CHANNELS\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>#OPUS_AUTO</dt><dd>Not forced (default)</dd>\n  * <dt>1</dt>         <dd>Forced mono</dd>\n  * <dt>2</dt>         <dd>Forced stereo</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_FORCE_CHANNELS(x) OPUS_SET_FORCE_CHANNELS_REQUEST, __opus_check_int(x)\n/** Gets the encoder's forced channel configuration.\n  * @see OPUS_SET_FORCE_CHANNELS\n  * @param[out] x <tt>opus_int32 *</tt>:\n  * <dl>\n  * <dt>#OPUS_AUTO</dt><dd>Not forced (default)</dd>\n  * <dt>1</dt>         <dd>Forced mono</dd>\n  * <dt>2</dt>         <dd>Forced stereo</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_FORCE_CHANNELS(x) OPUS_GET_FORCE_CHANNELS_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures the maximum bandpass that the encoder will select automatically.\n  * Applications should normally use this instead of #OPUS_SET_BANDWIDTH\n  * (leaving that set to the default, #OPUS_AUTO). 
This allows the\n  * application to set an upper bound based on the type of input it is\n  * providing, but still gives the encoder the freedom to reduce the bandpass\n  * when the bitrate becomes too low, for better overall quality.\n  * @see OPUS_GET_MAX_BANDWIDTH\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>OPUS_BANDWIDTH_NARROWBAND</dt>    <dd>4 kHz passband</dd>\n  * <dt>OPUS_BANDWIDTH_MEDIUMBAND</dt>    <dd>6 kHz passband</dd>\n  * <dt>OPUS_BANDWIDTH_WIDEBAND</dt>      <dd>8 kHz passband</dd>\n  * <dt>OPUS_BANDWIDTH_SUPERWIDEBAND</dt><dd>12 kHz passband</dd>\n  * <dt>OPUS_BANDWIDTH_FULLBAND</dt>     <dd>20 kHz passband (default)</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_MAX_BANDWIDTH(x) OPUS_SET_MAX_BANDWIDTH_REQUEST, __opus_check_int(x)\n\n/** Gets the encoder's configured maximum allowed bandpass.\n  * @see OPUS_SET_MAX_BANDWIDTH\n  * @param[out] x <tt>opus_int32 *</tt>: Allowed values:\n  * <dl>\n  * <dt>#OPUS_BANDWIDTH_NARROWBAND</dt>    <dd>4 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_MEDIUMBAND</dt>    <dd>6 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_WIDEBAND</dt>      <dd>8 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_SUPERWIDEBAND</dt><dd>12 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_FULLBAND</dt>     <dd>20 kHz passband (default)</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_MAX_BANDWIDTH(x) OPUS_GET_MAX_BANDWIDTH_REQUEST, __opus_check_int_ptr(x)\n\n/** Sets the encoder's bandpass to a specific value.\n  * This prevents the encoder from automatically selecting the bandpass based\n  * on the available bitrate. 
If an application knows the bandpass of the input\n  * audio it is providing, it should normally use #OPUS_SET_MAX_BANDWIDTH\n  * instead, which still gives the encoder the freedom to reduce the bandpass\n  * when the bitrate becomes too low, for better overall quality.\n  * @see OPUS_GET_BANDWIDTH\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>#OPUS_AUTO</dt>                    <dd>(default)</dd>\n  * <dt>#OPUS_BANDWIDTH_NARROWBAND</dt>    <dd>4 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_MEDIUMBAND</dt>    <dd>6 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_WIDEBAND</dt>      <dd>8 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_SUPERWIDEBAND</dt><dd>12 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_FULLBAND</dt>     <dd>20 kHz passband</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_BANDWIDTH(x) OPUS_SET_BANDWIDTH_REQUEST, __opus_check_int(x)\n\n/** Configures the type of signal being encoded.\n  * This is a hint which helps the encoder's mode selection.\n  * @see OPUS_GET_SIGNAL\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>#OPUS_AUTO</dt>        <dd>(default)</dd>\n  * <dt>#OPUS_SIGNAL_VOICE</dt><dd>Bias thresholds towards choosing LPC or Hybrid modes.</dd>\n  * <dt>#OPUS_SIGNAL_MUSIC</dt><dd>Bias thresholds towards choosing MDCT modes.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_SIGNAL(x) OPUS_SET_SIGNAL_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured signal type.\n  * @see OPUS_SET_SIGNAL\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>#OPUS_AUTO</dt>        <dd>(default)</dd>\n  * <dt>#OPUS_SIGNAL_VOICE</dt><dd>Bias thresholds towards choosing LPC or Hybrid modes.</dd>\n  * <dt>#OPUS_SIGNAL_MUSIC</dt><dd>Bias thresholds towards choosing MDCT modes.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_SIGNAL(x) OPUS_GET_SIGNAL_REQUEST, __opus_check_int_ptr(x)\n\n\n/** Configures the encoder's intended application.\n  * 
The initial value is a mandatory argument to the encoder_create function.\n  * @see OPUS_GET_APPLICATION\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>#OPUS_APPLICATION_VOIP</dt>\n  * <dd>Process signal for improved speech intelligibility.</dd>\n  * <dt>#OPUS_APPLICATION_AUDIO</dt>\n  * <dd>Favor faithfulness to the original input.</dd>\n  * <dt>#OPUS_APPLICATION_RESTRICTED_LOWDELAY</dt>\n  * <dd>Configure the minimum possible coding delay by disabling certain modes\n  * of operation.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_APPLICATION(x) OPUS_SET_APPLICATION_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured application.\n  * @see OPUS_SET_APPLICATION\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>#OPUS_APPLICATION_VOIP</dt>\n  * <dd>Process signal for improved speech intelligibility.</dd>\n  * <dt>#OPUS_APPLICATION_AUDIO</dt>\n  * <dd>Favor faithfulness to the original input.</dd>\n  * <dt>#OPUS_APPLICATION_RESTRICTED_LOWDELAY</dt>\n  * <dd>Configure the minimum possible coding delay by disabling certain modes\n  * of operation.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_APPLICATION(x) OPUS_GET_APPLICATION_REQUEST, __opus_check_int_ptr(x)\n\n/** Gets the total samples of delay added by the entire codec.\n  * This can be queried by the encoder and then the provided number of samples can be\n  * skipped on from the start of the decoder's output to provide time aligned input\n  * and output. 
From the perspective of a decoding application the real data begins this many\n  * samples late.\n  *\n  * The decoder contribution to this delay is identical for all decoders, but the\n  * encoder portion of the delay may vary from implementation to implementation,\n  * version to version, or even depend on the encoder's initial configuration.\n  * Applications needing delay compensation should call this CTL rather than\n  * hard-coding a value.\n  * @param[out] x <tt>opus_int32 *</tt>:   Number of lookahead samples\n  * @hideinitializer */\n#define OPUS_GET_LOOKAHEAD(x) OPUS_GET_LOOKAHEAD_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures the encoder's use of inband forward error correction (FEC).\n  * @note This is only applicable to the LPC layer\n  * @see OPUS_GET_INBAND_FEC\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>0</dt><dd>Disable inband FEC (default).</dd>\n  * <dt>1</dt><dd>Enable inband FEC.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_INBAND_FEC(x) OPUS_SET_INBAND_FEC_REQUEST, __opus_check_int(x)\n/** Gets encoder's configured use of inband forward error correction.\n  * @see OPUS_SET_INBAND_FEC\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>0</dt><dd>Inband FEC disabled (default).</dd>\n  * <dt>1</dt><dd>Inband FEC enabled.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_INBAND_FEC(x) OPUS_GET_INBAND_FEC_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures the encoder's expected packet loss percentage.\n  * Higher values trigger progressively more loss resistant behavior in the encoder\n  * at the expense of quality at a given bitrate in the absence of packet loss, but\n  * greater quality under loss.\n  * @see OPUS_GET_PACKET_LOSS_PERC\n  * @param[in] x <tt>opus_int32</tt>:   Loss percentage in the range 0-100, inclusive (default: 0).\n  * @hideinitializer */\n#define OPUS_SET_PACKET_LOSS_PERC(x) OPUS_SET_PACKET_LOSS_PERC_REQUEST, 
__opus_check_int(x)\n/** Gets the encoder's configured packet loss percentage.\n  * @see OPUS_SET_PACKET_LOSS_PERC\n  * @param[out] x <tt>opus_int32 *</tt>: Returns the configured loss percentage\n  *                                      in the range 0-100, inclusive (default: 0).\n  * @hideinitializer */\n#define OPUS_GET_PACKET_LOSS_PERC(x) OPUS_GET_PACKET_LOSS_PERC_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures the encoder's use of discontinuous transmission (DTX).\n  * @note This is only applicable to the LPC layer\n  * @see OPUS_GET_DTX\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>0</dt><dd>Disable DTX (default).</dd>\n  * <dt>1</dt><dd>Enable DTX.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_DTX(x) OPUS_SET_DTX_REQUEST, __opus_check_int(x)\n/** Gets encoder's configured use of discontinuous transmission.\n  * @see OPUS_SET_DTX\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>0</dt><dd>DTX disabled (default).</dd>\n  * <dt>1</dt><dd>DTX enabled.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_DTX(x) OPUS_GET_DTX_REQUEST, __opus_check_int_ptr(x)\n/** Configures the depth of signal being encoded.\n  *\n  * This is a hint which helps the encoder identify silence and near-silence.\n  * It represents the number of significant bits of linear intensity below\n  * which the signal contains ignorable quantization or other noise.\n  *\n  * For example, OPUS_SET_LSB_DEPTH(14) would be an appropriate setting\n  * for G.711 u-law input. 
OPUS_SET_LSB_DEPTH(16) would be appropriate\n  * for 16-bit linear pcm input with opus_encode_float().\n  *\n  * When using opus_encode() instead of opus_encode_float(), or when libopus\n  * is compiled for fixed-point, the encoder uses the minimum of the value\n  * set here and the value 16.\n  *\n  * @see OPUS_GET_LSB_DEPTH\n  * @param[in] x <tt>opus_int32</tt>: Input precision in bits, between 8 and 24\n  *                                   (default: 24).\n  * @hideinitializer */\n#define OPUS_SET_LSB_DEPTH(x) OPUS_SET_LSB_DEPTH_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured signal depth.\n  * @see OPUS_SET_LSB_DEPTH\n  * @param[out] x <tt>opus_int32 *</tt>: Input precision in bits, between 8 and\n  *                                      24 (default: 24).\n  * @hideinitializer */\n#define OPUS_GET_LSB_DEPTH(x) OPUS_GET_LSB_DEPTH_REQUEST, __opus_check_int_ptr(x)\n\n/** Configures the encoder's use of variable duration frames.\n  * When variable duration is enabled, the encoder is free to use a shorter frame\n  * size than the one requested in the opus_encode*() call.\n  * It is then the user's responsibility\n  * to verify how much audio was encoded by checking the ToC byte of the encoded\n  * packet. The part of the audio that was not encoded needs to be resent to the\n  * encoder for the next call. 
Do not use this option unless you <b>really</b>\n  * know what you are doing.\n  * @see OPUS_GET_EXPERT_FRAME_DURATION\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>OPUS_FRAMESIZE_ARG</dt><dd>Select frame size from the argument (default).</dd>\n  * <dt>OPUS_FRAMESIZE_2_5_MS</dt><dd>Use 2.5 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_5_MS</dt><dd>Use 5 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_10_MS</dt><dd>Use 10 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_20_MS</dt><dd>Use 20 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_40_MS</dt><dd>Use 40 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_60_MS</dt><dd>Use 60 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_80_MS</dt><dd>Use 80 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_100_MS</dt><dd>Use 100 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_120_MS</dt><dd>Use 120 ms frames.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_EXPERT_FRAME_DURATION(x) OPUS_SET_EXPERT_FRAME_DURATION_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured use of variable duration frames.\n  * @see OPUS_SET_EXPERT_FRAME_DURATION\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>OPUS_FRAMESIZE_ARG</dt><dd>Select frame size from the argument (default).</dd>\n  * <dt>OPUS_FRAMESIZE_2_5_MS</dt><dd>Use 2.5 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_5_MS</dt><dd>Use 5 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_10_MS</dt><dd>Use 10 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_20_MS</dt><dd>Use 20 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_40_MS</dt><dd>Use 40 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_60_MS</dt><dd>Use 60 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_80_MS</dt><dd>Use 80 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_100_MS</dt><dd>Use 100 ms frames.</dd>\n  * <dt>OPUS_FRAMESIZE_120_MS</dt><dd>Use 120 ms frames.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_EXPERT_FRAME_DURATION(x) OPUS_GET_EXPERT_FRAME_DURATION_REQUEST, __opus_check_int_ptr(x)\n\n/** If set to 1, disables almost all use of 
prediction, making frames almost\n  * completely independent. This reduces quality.\n  * @see OPUS_GET_PREDICTION_DISABLED\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>0</dt><dd>Enable prediction (default).</dd>\n  * <dt>1</dt><dd>Disable prediction.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_PREDICTION_DISABLED(x) OPUS_SET_PREDICTION_DISABLED_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured prediction status.\n  * @see OPUS_SET_PREDICTION_DISABLED\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>0</dt><dd>Prediction enabled (default).</dd>\n  * <dt>1</dt><dd>Prediction disabled.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_PREDICTION_DISABLED(x) OPUS_GET_PREDICTION_DISABLED_REQUEST, __opus_check_int_ptr(x)\n\n/**@}*/\n\n/** @defgroup opus_genericctls Generic CTLs\n  *\n  * These macros are used with the \\c opus_decoder_ctl and\n  * \\c opus_encoder_ctl calls to generate a particular\n  * request.\n  *\n  * When called on an \\c OpusDecoder they apply to that\n  * particular decoder instance. 
When called on an\n  * \\c OpusEncoder they apply to the corresponding setting\n  * on that encoder instance, if present.\n  *\n  * Some usage examples:\n  *\n  * @code\n  * int ret;\n  * opus_int32 pitch;\n  * ret = opus_decoder_ctl(dec_ctx, OPUS_GET_PITCH(&pitch));\n  * if (ret != OPUS_OK) return ret;\n  *\n  * opus_encoder_ctl(enc_ctx, OPUS_RESET_STATE);\n  * opus_decoder_ctl(dec_ctx, OPUS_RESET_STATE);\n  *\n  * opus_int32 enc_bw, dec_bw;\n  * opus_encoder_ctl(enc_ctx, OPUS_GET_BANDWIDTH(&enc_bw));\n  * opus_decoder_ctl(dec_ctx, OPUS_GET_BANDWIDTH(&dec_bw));\n  * if (enc_bw != dec_bw) {\n  *   printf(\"packet bandwidth mismatch!\\n\");\n  * }\n  * @endcode\n  *\n  * @see opus_encoder, opus_decoder_ctl, opus_encoder_ctl, opus_decoderctls, opus_encoderctls\n  * @{\n  */\n\n/** Resets the codec state to be equivalent to a freshly initialized state.\n  * This should be called when switching streams in order to prevent\n  * the back to back decoding from giving different results from\n  * one at a time decoding.\n  * @hideinitializer */\n#define OPUS_RESET_STATE 4028\n\n/** Gets the final state of the codec's entropy coder.\n  * This is used for testing purposes.\n  * The encoder and decoder state should be identical after coding a payload\n  * (assuming no data corruption or software bugs).\n  *\n  * @param[out] x <tt>opus_uint32 *</tt>: Entropy coder state\n  *\n  * @hideinitializer */\n#define OPUS_GET_FINAL_RANGE(x) OPUS_GET_FINAL_RANGE_REQUEST, __opus_check_uint_ptr(x)\n\n/** Gets the encoder's configured bandpass or the decoder's last bandpass.\n  * @see OPUS_SET_BANDWIDTH\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>#OPUS_AUTO</dt>                    <dd>(default)</dd>\n  * <dt>#OPUS_BANDWIDTH_NARROWBAND</dt>    <dd>4 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_MEDIUMBAND</dt>    <dd>6 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_WIDEBAND</dt>      <dd>8 kHz passband</dd>\n  * 
<dt>#OPUS_BANDWIDTH_SUPERWIDEBAND</dt><dd>12 kHz passband</dd>\n  * <dt>#OPUS_BANDWIDTH_FULLBAND</dt>     <dd>20 kHz passband</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_BANDWIDTH(x) OPUS_GET_BANDWIDTH_REQUEST, __opus_check_int_ptr(x)\n\n/** Gets the sampling rate the encoder or decoder was initialized with.\n  * This simply returns the <code>Fs</code> value passed to opus_encoder_init()\n  * or opus_decoder_init().\n  * @param[out] x <tt>opus_int32 *</tt>: Sampling rate of encoder or decoder.\n  * @hideinitializer\n  */\n#define OPUS_GET_SAMPLE_RATE(x) OPUS_GET_SAMPLE_RATE_REQUEST, __opus_check_int_ptr(x)\n\n/** If set to 1, disables the use of phase inversion for intensity stereo,\n  * improving the quality of mono downmixes, but slightly reducing normal\n  * stereo quality. Disabling phase inversion in the decoder does not comply\n  * with RFC 6716, although it does not cause any interoperability issue and\n  * is expected to become part of the Opus standard once RFC 6716 is updated\n  * by draft-ietf-codec-opus-update.\n  * @see OPUS_GET_PHASE_INVERSION_DISABLED\n  * @param[in] x <tt>opus_int32</tt>: Allowed values:\n  * <dl>\n  * <dt>0</dt><dd>Enable phase inversion (default).</dd>\n  * <dt>1</dt><dd>Disable phase inversion.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_SET_PHASE_INVERSION_DISABLED(x) OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured phase inversion status.\n  * @see OPUS_SET_PHASE_INVERSION_DISABLED\n  * @param[out] x <tt>opus_int32 *</tt>: Returns one of the following values:\n  * <dl>\n  * <dt>0</dt><dd>Stereo phase inversion enabled (default).</dd>\n  * <dt>1</dt><dd>Stereo phase inversion disabled.</dd>\n  * </dl>\n  * @hideinitializer */\n#define OPUS_GET_PHASE_INVERSION_DISABLED(x) OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST, __opus_check_int_ptr(x)\n\n/**@}*/\n\n/** @defgroup opus_decoderctls Decoder related CTLs\n  * @see opus_genericctls, opus_encoderctls, 
opus_decoder\n  * @{\n  */\n\n/** Configures decoder gain adjustment.\n  * Scales the decoded output by a factor specified in Q8 dB units.\n  * This has a maximum range of -32768 to 32767 inclusive, and returns\n  * OPUS_BAD_ARG otherwise. The default is zero indicating no adjustment.\n  * This setting survives decoder reset.\n  *\n  * gain = pow(10, x/(20.0*256))\n  *\n  * @param[in] x <tt>opus_int32</tt>:   Amount to scale PCM signal by in Q8 dB units.\n  * @hideinitializer */\n#define OPUS_SET_GAIN(x) OPUS_SET_GAIN_REQUEST, __opus_check_int(x)\n/** Gets the decoder's configured gain adjustment. @see OPUS_SET_GAIN\n  *\n  * @param[out] x <tt>opus_int32 *</tt>: Amount to scale PCM signal by in Q8 dB units.\n  * @hideinitializer */\n#define OPUS_GET_GAIN(x) OPUS_GET_GAIN_REQUEST, __opus_check_int_ptr(x)\n\n/** Gets the duration (in samples) of the last packet successfully decoded or concealed.\n  * @param[out] x <tt>opus_int32 *</tt>: Number of samples (at current sampling rate).\n  * @hideinitializer */\n#define OPUS_GET_LAST_PACKET_DURATION(x) OPUS_GET_LAST_PACKET_DURATION_REQUEST, __opus_check_int_ptr(x)\n\n/** Gets the pitch of the last decoded frame, if available.\n  * This can be used for any post-processing algorithm requiring the use of pitch,\n  * e.g. time stretching/shortening. 
If the last frame was not voiced, or if the\n  * pitch was not coded in the frame, then zero is returned.\n  *\n  * This CTL is only implemented for decoder instances.\n  *\n  * @param[out] x <tt>opus_int32 *</tt>: pitch period at 48 kHz (or 0 if not available)\n  *\n  * @hideinitializer */\n#define OPUS_GET_PITCH(x) OPUS_GET_PITCH_REQUEST, __opus_check_int_ptr(x)\n\n/**@}*/\n\n/** @defgroup opus_libinfo Opus library information functions\n  * @{\n  */\n\n/** Converts an opus error code into a human readable string.\n  *\n  * @param[in] error <tt>int</tt>: Error number\n  * @returns Error string\n  */\nOPUS_EXPORT const char *opus_strerror(int error);\n\n/** Gets the libopus version string.\n  *\n  * Applications may look for the substring \"-fixed\" in the version string to\n  * determine whether they have a fixed-point or floating-point build at\n  * runtime.\n  *\n  * @returns Version string\n  */\nOPUS_EXPORT const char *opus_get_version_string(void);\n/**@}*/\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* OPUS_DEFINES_H */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/include/opus_multistream.h",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n/**\n * @file opus_multistream.h\n * @brief Opus reference implementation multistream API\n */\n\n#ifndef OPUS_MULTISTREAM_H\n#define OPUS_MULTISTREAM_H\n\n#include \"opus.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/** @cond OPUS_INTERNAL_DOC */\n\n/** Macros to trigger compilation errors when the wrong types are provided to a\n  * CTL. 
*/\n/**@{*/\n#define __opus_check_encstate_ptr(ptr) ((ptr) + ((ptr) - (OpusEncoder**)(ptr)))\n#define __opus_check_decstate_ptr(ptr) ((ptr) + ((ptr) - (OpusDecoder**)(ptr)))\n/**@}*/\n\n/** These are the actual encoder and decoder CTL ID numbers.\n  * They should not be used directly by applications.\n  * In general, SETs should be even and GETs should be odd.*/\n/**@{*/\n#define OPUS_MULTISTREAM_GET_ENCODER_STATE_REQUEST 5120\n#define OPUS_MULTISTREAM_GET_DECODER_STATE_REQUEST 5122\n/**@}*/\n\n/** @endcond */\n\n/** @defgroup opus_multistream_ctls Multistream specific encoder and decoder CTLs\n  *\n  * These are convenience macros that are specific to the\n  * opus_multistream_encoder_ctl() and opus_multistream_decoder_ctl()\n  * interface.\n  * The CTLs from @ref opus_genericctls, @ref opus_encoderctls, and\n  * @ref opus_decoderctls may be applied to a multistream encoder or decoder as\n  * well.\n  * In addition, you may retrieve the encoder or decoder state for a specific\n  * stream via #OPUS_MULTISTREAM_GET_ENCODER_STATE or\n  * #OPUS_MULTISTREAM_GET_DECODER_STATE and apply CTLs to it individually.\n  */\n/**@{*/\n\n/** Gets the encoder state for an individual stream of a multistream encoder.\n  * @param[in] x <tt>opus_int32</tt>: The index of the stream whose encoder you\n  *                                   wish to retrieve.\n  *                                   This must be non-negative and less than\n  *                                   the <code>streams</code> parameter used\n  *                                   to initialize the encoder.\n  * @param[out] y <tt>OpusEncoder**</tt>: Returns a pointer to the given\n  *                                       encoder state.\n  * @retval OPUS_BAD_ARG The index of the requested stream was out of range.\n  * @hideinitializer\n  */\n#define OPUS_MULTISTREAM_GET_ENCODER_STATE(x,y) OPUS_MULTISTREAM_GET_ENCODER_STATE_REQUEST, __opus_check_int(x), __opus_check_encstate_ptr(y)\n\n/** Gets the decoder state for an 
individual stream of a multistream decoder.\n  * @param[in] x <tt>opus_int32</tt>: The index of the stream whose decoder you\n  *                                   wish to retrieve.\n  *                                   This must be non-negative and less than\n  *                                   the <code>streams</code> parameter used\n  *                                   to initialize the decoder.\n  * @param[out] y <tt>OpusDecoder**</tt>: Returns a pointer to the given\n  *                                       decoder state.\n  * @retval OPUS_BAD_ARG The index of the requested stream was out of range.\n  * @hideinitializer\n  */\n#define OPUS_MULTISTREAM_GET_DECODER_STATE(x,y) OPUS_MULTISTREAM_GET_DECODER_STATE_REQUEST, __opus_check_int(x), __opus_check_decstate_ptr(y)\n\n/**@}*/\n\n/** @defgroup opus_multistream Opus Multistream API\n  * @{\n  *\n  * The multistream API allows individual Opus streams to be combined into a\n  * single packet, enabling support for up to 255 channels. Unlike an\n  * elementary Opus stream, the encoder and decoder must negotiate the channel\n  * configuration before the decoder can successfully interpret the data in the\n  * packets produced by the encoder. 
Some basic information, such as packet\n  * duration, can be computed without any special negotiation.\n  *\n  * The format for multistream Opus packets is defined in\n  * <a href=\"https://tools.ietf.org/html/rfc7845\">RFC 7845</a>\n  * and is based on the self-delimited Opus framing described in Appendix B of\n  * <a href=\"https://tools.ietf.org/html/rfc6716\">RFC 6716</a>.\n  * Normal Opus packets are just a degenerate case of multistream Opus packets,\n  * and can be encoded or decoded with the multistream API by setting\n  * <code>streams</code> to <code>1</code> when initializing the encoder or\n  * decoder.\n  *\n  * Multistream Opus streams can contain up to 255 elementary Opus streams.\n  * These may be either \"uncoupled\" or \"coupled\", indicating that the decoder\n  * is configured to decode them to either 1 or 2 channels, respectively.\n  * The streams are ordered so that all coupled streams appear at the\n  * beginning.\n  *\n  * A <code>mapping</code> table defines which decoded channel <code>i</code>\n  * should be used for each input/output (I/O) channel <code>j</code>. This table is\n  * typically provided as an unsigned char array.\n  * Let <code>i = mapping[j]</code> be the index for I/O channel <code>j</code>.\n  * If <code>i < 2*coupled_streams</code>, then I/O channel <code>j</code> is\n  * encoded as the left channel of stream <code>(i/2)</code> if <code>i</code>\n  * is even, or  as the right channel of stream <code>(i/2)</code> if\n  * <code>i</code> is odd. Otherwise, I/O channel <code>j</code> is encoded as\n  * mono in stream <code>(i - coupled_streams)</code>, unless it has the special\n  * value 255, in which case it is omitted from the encoding entirely (the\n  * decoder will reproduce it as silence). 
Each value <code>i</code> must either\n  * be the special value 255 or be less than <code>streams + coupled_streams</code>.\n  *\n  * The output channels specified by the encoder\n  * should use the\n  * <a href=\"https://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-810004.3.9\">Vorbis\n  * channel ordering</a>. A decoder may wish to apply an additional permutation\n  * to the mapping the encoder used to achieve a different output channel\n  * order (e.g. for outputting in WAV order).\n  *\n  * Each multistream packet contains an Opus packet for each stream, and all of\n  * the Opus packets in a single multistream packet must have the same\n  * duration. Therefore the duration of a multistream packet can be extracted\n  * from the TOC sequence of the first stream, which is located at the\n  * beginning of the packet, just like an elementary Opus stream:\n  *\n  * @code\n  * int nb_samples;\n  * int nb_frames;\n  * nb_frames = opus_packet_get_nb_frames(data, len);\n  * if (nb_frames < 1)\n  *   return nb_frames;\n  * nb_samples = opus_packet_get_samples_per_frame(data, 48000) * nb_frames;\n  * @endcode\n  *\n  * The general encoding and decoding process proceeds exactly the same as in\n  * the normal @ref opus_encoder and @ref opus_decoder APIs.\n  * See their documentation for an overview of how to use the corresponding\n  * multistream functions.\n  */\n\n/** Opus multistream encoder state.\n  * This contains the complete state of a multistream Opus encoder.\n  * It is position independent and can be freely copied.\n  * @see opus_multistream_encoder_create\n  * @see opus_multistream_encoder_init\n  */\ntypedef struct OpusMSEncoder OpusMSEncoder;\n\n/** Opus multistream decoder state.\n  * This contains the complete state of a multistream Opus decoder.\n  * It is position independent and can be freely copied.\n  * @see opus_multistream_decoder_create\n  * @see opus_multistream_decoder_init\n  */\ntypedef struct OpusMSDecoder OpusMSDecoder;\n\n/**\\name Multistream 
encoder functions */\n/**@{*/\n\n/** Gets the size of an OpusMSEncoder structure.\n  * @param streams <tt>int</tt>: The total number of streams to encode from the\n  *                              input.\n  *                              This must be no more than 255.\n  * @param coupled_streams <tt>int</tt>: Number of coupled (2 channel) streams\n  *                                      to encode.\n  *                                      This must be no larger than the total\n  *                                      number of streams.\n  *                                      Additionally, the total number of\n  *                                      encoded channels (<code>streams +\n  *                                      coupled_streams</code>) must be no\n  *                                      more than 255.\n  * @returns The size in bytes on success, or a negative error code\n  *          (see @ref opus_errorcodes) on error.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_multistream_encoder_get_size(\n      int streams,\n      int coupled_streams\n);\n\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_multistream_surround_encoder_get_size(\n      int channels,\n      int mapping_family\n);\n\n\n/** Allocates and initializes a multistream encoder state.\n  * Call opus_multistream_encoder_destroy() to release\n  * this object when finished.\n  * @param Fs <tt>opus_int32</tt>: Sampling rate of the input signal (in Hz).\n  *                                This must be one of 8000, 12000, 16000,\n  *                                24000, or 48000.\n  * @param channels <tt>int</tt>: Number of channels in the input signal.\n  *                               This must be at most 255.\n  *                               It may be greater than the number of\n  *                               coded channels (<code>streams +\n  *                               coupled_streams</code>).\n  * @param streams <tt>int</tt>: The total number of streams to 
encode from the\n  *                              input.\n  *                              This must be no more than the number of channels.\n  * @param coupled_streams <tt>int</tt>: Number of coupled (2 channel) streams\n  *                                      to encode.\n  *                                      This must be no larger than the total\n  *                                      number of streams.\n  *                                      Additionally, the total number of\n  *                                      encoded channels (<code>streams +\n  *                                      coupled_streams</code>) must be no\n  *                                      more than the number of input channels.\n  * @param[in] mapping <code>const unsigned char[channels]</code>: Mapping from\n  *                    encoded channels to input channels, as described in\n  *                    @ref opus_multistream. As an extra constraint, the\n  *                    multistream encoder does not allow encoding coupled\n  *                    streams for which one channel is unused since this\n  *                    is never a good idea.\n  * @param application <tt>int</tt>: The target encoder application.\n  *                                  This must be one of the following:\n  * <dl>\n  * <dt>#OPUS_APPLICATION_VOIP</dt>\n  * <dd>Process signal for improved speech intelligibility.</dd>\n  * <dt>#OPUS_APPLICATION_AUDIO</dt>\n  * <dd>Favor faithfulness to the original input.</dd>\n  * <dt>#OPUS_APPLICATION_RESTRICTED_LOWDELAY</dt>\n  * <dd>Configure the minimum possible coding delay by disabling certain modes\n  * of operation.</dd>\n  * </dl>\n  * @param[out] error <tt>int *</tt>: Returns #OPUS_OK on success, or an error\n  *                                   code (see @ref opus_errorcodes) on\n  *                                   failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT OpusMSEncoder *opus_multistream_encoder_create(\n      opus_int32 Fs,\n      int 
channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int application,\n      int *error\n) OPUS_ARG_NONNULL(5);\n\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT OpusMSEncoder *opus_multistream_surround_encoder_create(\n      opus_int32 Fs,\n      int channels,\n      int mapping_family,\n      int *streams,\n      int *coupled_streams,\n      unsigned char *mapping,\n      int application,\n      int *error\n) OPUS_ARG_NONNULL(4) OPUS_ARG_NONNULL(5) OPUS_ARG_NONNULL(6);\n\n/** Initialize a previously allocated multistream encoder state.\n  * The memory pointed to by \\a st must be at least the size returned by\n  * opus_multistream_encoder_get_size().\n  * This is intended for applications which use their own allocator instead of\n  * malloc.\n  * To reset a previously initialized state, use the #OPUS_RESET_STATE CTL.\n  * @see opus_multistream_encoder_create\n  * @see opus_multistream_encoder_get_size\n  * @param st <tt>OpusMSEncoder*</tt>: Multistream encoder state to initialize.\n  * @param Fs <tt>opus_int32</tt>: Sampling rate of the input signal (in Hz).\n  *                                This must be one of 8000, 12000, 16000,\n  *                                24000, or 48000.\n  * @param channels <tt>int</tt>: Number of channels in the input signal.\n  *                               This must be at most 255.\n  *                               It may be greater than the number of\n  *                               coded channels (<code>streams +\n  *                               coupled_streams</code>).\n  * @param streams <tt>int</tt>: The total number of streams to encode from the\n  *                              input.\n  *                              This must be no more than the number of channels.\n  * @param coupled_streams <tt>int</tt>: Number of coupled (2 channel) streams\n  *                                      to encode.\n  *                                      This must be no larger than the total\n 
 *                                      number of streams.\n  *                                      Additionally, the total number of\n  *                                      encoded channels (<code>streams +\n  *                                      coupled_streams</code>) must be no\n  *                                      more than the number of input channels.\n  * @param[in] mapping <code>const unsigned char[channels]</code>: Mapping from\n  *                    encoded channels to input channels, as described in\n  *                    @ref opus_multistream. As an extra constraint, the\n  *                    multistream encoder does not allow encoding coupled\n  *                    streams for which one channel is unused since this\n  *                    is never a good idea.\n  * @param application <tt>int</tt>: The target encoder application.\n  *                                  This must be one of the following:\n  * <dl>\n  * <dt>#OPUS_APPLICATION_VOIP</dt>\n  * <dd>Process signal for improved speech intelligibility.</dd>\n  * <dt>#OPUS_APPLICATION_AUDIO</dt>\n  * <dd>Favor faithfulness to the original input.</dd>\n  * <dt>#OPUS_APPLICATION_RESTRICTED_LOWDELAY</dt>\n  * <dd>Configure the minimum possible coding delay by disabling certain modes\n  * of operation.</dd>\n  * </dl>\n  * @returns #OPUS_OK on success, or an error code (see @ref opus_errorcodes)\n  *          on failure.\n  */\nOPUS_EXPORT int opus_multistream_encoder_init(\n      OpusMSEncoder *st,\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int application\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(6);\n\nOPUS_EXPORT int opus_multistream_surround_encoder_init(\n      OpusMSEncoder *st,\n      opus_int32 Fs,\n      int channels,\n      int mapping_family,\n      int *streams,\n      int *coupled_streams,\n      unsigned char *mapping,\n      int application\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(5) 
OPUS_ARG_NONNULL(6) OPUS_ARG_NONNULL(7);\n\n/** Encodes a multistream Opus frame.\n  * @param st <tt>OpusMSEncoder*</tt>: Multistream encoder state.\n  * @param[in] pcm <tt>const opus_int16*</tt>: The input signal as interleaved\n  *                                            samples.\n  *                                            This must contain\n  *                                            <code>frame_size*channels</code>\n  *                                            samples.\n  * @param frame_size <tt>int</tt>: Number of samples per channel in the input\n  *                                 signal.\n  *                                 This must be an Opus frame size for the\n  *                                 encoder's sampling rate.\n  *                                 For example, at 48 kHz the permitted values\n  *                                 are 120, 240, 480, 960, 1920, and 2880.\n  *                                 Passing in a duration of less than 10 ms\n  *                                 (480 samples at 48 kHz) will prevent the\n  *                                 encoder from using the LPC or hybrid modes.\n  * @param[out] data <tt>unsigned char*</tt>: Output payload.\n  *                                           This must contain storage for at\n  *                                           least \\a max_data_bytes.\n  * @param [in] max_data_bytes <tt>opus_int32</tt>: Size of the allocated\n  *                                                 memory for the output\n  *                                                 payload. This may be\n  *                                                 used to impose an upper limit on\n  *                                                 the instant bitrate, but should\n  *                                                 not be used as the only bitrate\n  *                                                 control. 
Use #OPUS_SET_BITRATE to\n  *                                                 control the bitrate.\n  * @returns The length of the encoded packet (in bytes) on success or a\n  *          negative error code (see @ref opus_errorcodes) on failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_multistream_encode(\n    OpusMSEncoder *st,\n    const opus_int16 *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2) OPUS_ARG_NONNULL(4);\n\n/** Encodes a multistream Opus frame from floating point input.\n  * @param st <tt>OpusMSEncoder*</tt>: Multistream encoder state.\n  * @param[in] pcm <tt>const float*</tt>: The input signal as interleaved\n  *                                       samples with a normal range of\n  *                                       +/-1.0.\n  *                                       Samples with a range beyond +/-1.0\n  *                                       are supported but will be clipped by\n  *                                       decoders using the integer API and\n  *                                       should only be used if it is known\n  *                                       that the far end supports extended\n  *                                       dynamic range.\n  *                                       This must contain\n  *                                       <code>frame_size*channels</code>\n  *                                       samples.\n  * @param frame_size <tt>int</tt>: Number of samples per channel in the input\n  *                                 signal.\n  *                                 This must be an Opus frame size for the\n  *                                 encoder's sampling rate.\n  *                                 For example, at 48 kHz the permitted values\n  *                                 are 120, 240, 480, 960, 1920, and 2880.\n  *                                 Passing in a duration of less than 10 ms\n  *           
                      (480 samples at 48 kHz) will prevent the\n  *                                 encoder from using the LPC or hybrid modes.\n  * @param[out] data <tt>unsigned char*</tt>: Output payload.\n  *                                           This must contain storage for at\n  *                                           least \\a max_data_bytes.\n  * @param [in] max_data_bytes <tt>opus_int32</tt>: Size of the allocated\n  *                                                 memory for the output\n  *                                                 payload. This may be\n  *                                                 used to impose an upper limit on\n  *                                                 the instant bitrate, but should\n  *                                                 not be used as the only bitrate\n  *                                                 control. Use #OPUS_SET_BITRATE to\n  *                                                 control the bitrate.\n  * @returns The length of the encoded packet (in bytes) on success or a\n  *          negative error code (see @ref opus_errorcodes) on failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_multistream_encode_float(\n      OpusMSEncoder *st,\n      const float *pcm,\n      int frame_size,\n      unsigned char *data,\n      opus_int32 max_data_bytes\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(2) OPUS_ARG_NONNULL(4);\n\n/** Frees an <code>OpusMSEncoder</code> allocated by\n  * opus_multistream_encoder_create().\n  * @param st <tt>OpusMSEncoder*</tt>: Multistream encoder state to be freed.\n  */\nOPUS_EXPORT void opus_multistream_encoder_destroy(OpusMSEncoder *st);\n\n/** Perform a CTL function on a multistream Opus encoder.\n  *\n  * Generally the request and subsequent arguments are generated by a\n  * convenience macro.\n  * @param st <tt>OpusMSEncoder*</tt>: Multistream encoder state.\n  * @param request This and all remaining parameters should be replaced by one\n  *           
     of the convenience macros in @ref opus_genericctls,\n  *                @ref opus_encoderctls, or @ref opus_multistream_ctls.\n  * @see opus_genericctls\n  * @see opus_encoderctls\n  * @see opus_multistream_ctls\n  */\nOPUS_EXPORT int opus_multistream_encoder_ctl(OpusMSEncoder *st, int request, ...) OPUS_ARG_NONNULL(1);\n\n/**@}*/\n\n/**\\name Multistream decoder functions */\n/**@{*/\n\n/** Gets the size of an <code>OpusMSDecoder</code> structure.\n  * @param streams <tt>int</tt>: The total number of streams coded in the\n  *                              input.\n  *                              This must be no more than 255.\n  * @param coupled_streams <tt>int</tt>: Number of streams to decode as coupled\n  *                                      (2 channel) streams.\n  *                                      This must be no larger than the total\n  *                                      number of streams.\n  *                                      Additionally, the total number of\n  *                                      coded channels (<code>streams +\n  *                                      coupled_streams</code>) must be no\n  *                                      more than 255.\n  * @returns The size in bytes on success, or a negative error code\n  *          (see @ref opus_errorcodes) on error.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT opus_int32 opus_multistream_decoder_get_size(\n      int streams,\n      int coupled_streams\n);\n\n/** Allocates and initializes a multistream decoder state.\n  * Call opus_multistream_decoder_destroy() to release\n  * this object when finished.\n  * @param Fs <tt>opus_int32</tt>: Sampling rate to decode at (in Hz).\n  *                                This must be one of 8000, 12000, 16000,\n  *                                24000, or 48000.\n  * @param channels <tt>int</tt>: Number of channels to output.\n  *                               This must be at most 255.\n  *                               It may be different 
from the number of coded\n  *                               channels (<code>streams +\n  *                               coupled_streams</code>).\n  * @param streams <tt>int</tt>: The total number of streams coded in the\n  *                              input.\n  *                              This must be no more than 255.\n  * @param coupled_streams <tt>int</tt>: Number of streams to decode as coupled\n  *                                      (2 channel) streams.\n  *                                      This must be no larger than the total\n  *                                      number of streams.\n  *                                      Additionally, the total number of\n  *                                      coded channels (<code>streams +\n  *                                      coupled_streams</code>) must be no\n  *                                      more than 255.\n  * @param[in] mapping <code>const unsigned char[channels]</code>: Mapping from\n  *                    coded channels to output channels, as described in\n  *                    @ref opus_multistream.\n  * @param[out] error <tt>int *</tt>: Returns #OPUS_OK on success, or an error\n  *                                   code (see @ref opus_errorcodes) on\n  *                                   failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT OpusMSDecoder *opus_multistream_decoder_create(\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int *error\n) OPUS_ARG_NONNULL(5);\n\n/** Initialize a previously allocated decoder state object.\n  * The memory pointed to by \a st must be at least the size returned by\n  * opus_multistream_decoder_get_size().\n  * This is intended for applications which use their own allocator instead of\n  * malloc.\n  * To reset a previously initialized state, use the #OPUS_RESET_STATE CTL.\n  * @see opus_multistream_decoder_create\n  * @see opus_multistream_decoder_get_size\n  
* @param st <tt>OpusMSDecoder*</tt>: Multistream decoder state to initialize.\n  * @param Fs <tt>opus_int32</tt>: Sampling rate to decode at (in Hz).\n  *                                This must be one of 8000, 12000, 16000,\n  *                                24000, or 48000.\n  * @param channels <tt>int</tt>: Number of channels to output.\n  *                               This must be at most 255.\n  *                               It may be different from the number of coded\n  *                               channels (<code>streams +\n  *                               coupled_streams</code>).\n  * @param streams <tt>int</tt>: The total number of streams coded in the\n  *                              input.\n  *                              This must be no more than 255.\n  * @param coupled_streams <tt>int</tt>: Number of streams to decode as coupled\n  *                                      (2 channel) streams.\n  *                                      This must be no larger than the total\n  *                                      number of streams.\n  *                                      Additionally, the total number of\n  *                                      coded channels (<code>streams +\n  *                                      coupled_streams</code>) must be no\n  *                                      more than 255.\n  * @param[in] mapping <code>const unsigned char[channels]</code>: Mapping from\n  *                    coded channels to output channels, as described in\n  *                    @ref opus_multistream.\n  * @returns #OPUS_OK on success, or an error code (see @ref opus_errorcodes)\n  *          on failure.\n  */\nOPUS_EXPORT int opus_multistream_decoder_init(\n      OpusMSDecoder *st,\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(6);\n\n/** Decode a multistream Opus packet.\n  * @param st <tt>OpusMSDecoder*</tt>: 
Multistream decoder state.\n  * @param[in] data <tt>const unsigned char*</tt>: Input payload.\n  *                                                Use a <code>NULL</code>\n  *                                                pointer to indicate packet\n  *                                                loss.\n  * @param len <tt>opus_int32</tt>: Number of bytes in payload.\n  * @param[out] pcm <tt>opus_int16*</tt>: Output signal, with interleaved\n  *                                       samples.\n  *                                       This must contain room for\n  *                                       <code>frame_size*channels</code>\n  *                                       samples.\n  * @param frame_size <tt>int</tt>: The number of samples per channel of\n  *                                 available space in \\a pcm.\n  *                                 If this is less than the maximum packet duration\n  *                                 (120 ms; 5760 for 48kHz), this function will not be capable\n  *                                 of decoding some packets. In the case of PLC (data==NULL)\n  *                                 or FEC (decode_fec=1), then frame_size needs to be exactly\n  *                                 the duration of audio that is missing, otherwise the\n  *                                 decoder will not be in the optimal state to decode the\n  *                                 next incoming packet. 
For the PLC and FEC cases, frame_size\n  *                                 <b>must</b> be a multiple of 2.5 ms.\n  * @param decode_fec <tt>int</tt>: Flag (0 or 1) to request that any in-band\n  *                                 forward error correction data be decoded.\n  *                                 If no such data is available, the frame is\n  *                                 decoded as if it were lost.\n  * @returns Number of samples decoded on success or a negative error code\n  *          (see @ref opus_errorcodes) on failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_multistream_decode(\n    OpusMSDecoder *st,\n    const unsigned char *data,\n    opus_int32 len,\n    opus_int16 *pcm,\n    int frame_size,\n    int decode_fec\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Decode a multistream Opus packet with floating point output.\n  * @param st <tt>OpusMSDecoder*</tt>: Multistream decoder state.\n  * @param[in] data <tt>const unsigned char*</tt>: Input payload.\n  *                                                Use a <code>NULL</code>\n  *                                                pointer to indicate packet\n  *                                                loss.\n  * @param len <tt>opus_int32</tt>: Number of bytes in payload.\n  * @param[out] pcm <tt>float*</tt>: Output signal, with interleaved\n  *                                       samples.\n  *                                       This must contain room for\n  *                                       <code>frame_size*channels</code>\n  *                                       samples.\n  * @param frame_size <tt>int</tt>: The number of samples per channel of\n  *                                 available space in \a pcm.\n  *                                 If this is less than the maximum packet duration\n  *                                 (120 ms; 5760 for 48kHz), this function will not be capable\n  *                                 of decoding some packets. 
In the case of PLC (data==NULL)\n  *                                 or FEC (decode_fec=1), then frame_size needs to be exactly\n  *                                 the duration of audio that is missing, otherwise the\n  *                                 decoder will not be in the optimal state to decode the\n  *                                 next incoming packet. For the PLC and FEC cases, frame_size\n  *                                 <b>must</b> be a multiple of 2.5 ms.\n  * @param decode_fec <tt>int</tt>: Flag (0 or 1) to request that any in-band\n  *                                 forward error correction data be decoded.\n  *                                 If no such data is available, the frame is\n  *                                 decoded as if it were lost.\n  * @returns Number of samples decoded on success or a negative error code\n  *          (see @ref opus_errorcodes) on failure.\n  */\nOPUS_EXPORT OPUS_WARN_UNUSED_RESULT int opus_multistream_decode_float(\n    OpusMSDecoder *st,\n    const unsigned char *data,\n    opus_int32 len,\n    float *pcm,\n    int frame_size,\n    int decode_fec\n) OPUS_ARG_NONNULL(1) OPUS_ARG_NONNULL(4);\n\n/** Perform a CTL function on a multistream Opus decoder.\n  *\n  * Generally the request and subsequent arguments are generated by a\n  * convenience macro.\n  * @param st <tt>OpusMSDecoder*</tt>: Multistream decoder state.\n  * @param request This and all remaining parameters should be replaced by one\n  *                of the convenience macros in @ref opus_genericctls,\n  *                @ref opus_decoderctls, or @ref opus_multistream_ctls.\n  * @see opus_genericctls\n  * @see opus_decoderctls\n  * @see opus_multistream_ctls\n  */\nOPUS_EXPORT int opus_multistream_decoder_ctl(OpusMSDecoder *st, int request, ...) 
OPUS_ARG_NONNULL(1);\n\n/** Frees an <code>OpusMSDecoder</code> allocated by\n  * opus_multistream_decoder_create().\n  * @param st <tt>OpusMSDecoder*</tt>: Multistream decoder state to be freed.\n  */\nOPUS_EXPORT void opus_multistream_decoder_destroy(OpusMSDecoder *st);\n\n/**@}*/\n\n/**@}*/\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* OPUS_MULTISTREAM_H */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/include/opus_types.h",
    "content": "/* (C) COPYRIGHT 1994-2002 Xiph.Org Foundation */\n/* Modified by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n/* opus_types.h based on ogg_types.h from libogg */\n\n/**\n   @file opus_types.h\n   @brief Opus reference implementation types\n*/\n#ifndef OPUS_TYPES_H\n#define OPUS_TYPES_H\n\n/* Use the real stdint.h if it's there (taken from Paul Hsieh's pstdint.h) */\n#if (defined(__STDC__) && __STDC__ && defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L) || (defined(__GNUC__) && (defined(_STDINT_H) || defined(_STDINT_H_)) || defined (HAVE_STDINT_H))\n#include <stdint.h>\n\n   typedef int16_t opus_int16;\n   typedef uint16_t opus_uint16;\n   typedef int32_t opus_int32;\n  
 typedef uint32_t opus_uint32;\n#elif defined(_WIN32)\n\n#  if defined(__CYGWIN__)\n#    include <_G_config.h>\n     typedef _G_int32_t opus_int32;\n     typedef _G_uint32_t opus_uint32;\n     typedef _G_int16 opus_int16;\n     typedef _G_uint16 opus_uint16;\n#  elif defined(__MINGW32__)\n     typedef short opus_int16;\n     typedef unsigned short opus_uint16;\n     typedef int opus_int32;\n     typedef unsigned int opus_uint32;\n#  elif defined(__MWERKS__)\n     typedef int opus_int32;\n     typedef unsigned int opus_uint32;\n     typedef short opus_int16;\n     typedef unsigned short opus_uint16;\n#  else\n     /* MSVC/Borland */\n     typedef __int32 opus_int32;\n     typedef unsigned __int32 opus_uint32;\n     typedef __int16 opus_int16;\n     typedef unsigned __int16 opus_uint16;\n#  endif\n\n#elif defined(__MACOS__)\n\n#  include <sys/types.h>\n   typedef SInt16 opus_int16;\n   typedef UInt16 opus_uint16;\n   typedef SInt32 opus_int32;\n   typedef UInt32 opus_uint32;\n\n#elif (defined(__APPLE__) && defined(__MACH__)) /* MacOS X Framework build */\n\n#  include <sys/types.h>\n   typedef int16_t opus_int16;\n   typedef u_int16_t opus_uint16;\n   typedef int32_t opus_int32;\n   typedef u_int32_t opus_uint32;\n\n#elif defined(__BEOS__)\n\n   /* Be */\n#  include <inttypes.h>\n   typedef int16 opus_int16;\n   typedef u_int16 opus_uint16;\n   typedef int32_t opus_int32;\n   typedef u_int32_t opus_uint32;\n\n#elif defined (__EMX__)\n\n   /* OS/2 GCC */\n   typedef short opus_int16;\n   typedef unsigned short opus_uint16;\n   typedef int opus_int32;\n   typedef unsigned int opus_uint32;\n\n#elif defined (DJGPP)\n\n   /* DJGPP */\n   typedef short opus_int16;\n   typedef unsigned short opus_uint16;\n   typedef int opus_int32;\n   typedef unsigned int opus_uint32;\n\n#elif defined(R5900)\n\n   /* PS2 EE */\n   typedef int opus_int32;\n   typedef unsigned opus_uint32;\n   typedef short opus_int16;\n   typedef unsigned short opus_uint16;\n\n#elif 
defined(__SYMBIAN32__)\n\n   /* Symbian GCC */\n   typedef signed short opus_int16;\n   typedef unsigned short opus_uint16;\n   typedef signed int opus_int32;\n   typedef unsigned int opus_uint32;\n\n#elif defined(CONFIG_TI_C54X) || defined (CONFIG_TI_C55X)\n\n   typedef short opus_int16;\n   typedef unsigned short opus_uint16;\n   typedef long opus_int32;\n   typedef unsigned long opus_uint32;\n\n#elif defined(CONFIG_TI_C6X)\n\n   typedef short opus_int16;\n   typedef unsigned short opus_uint16;\n   typedef int opus_int32;\n   typedef unsigned int opus_uint32;\n\n#else\n\n   /* Give up, take a reasonable guess */\n   typedef short opus_int16;\n   typedef unsigned short opus_uint16;\n   typedef int opus_int32;\n   typedef unsigned int opus_uint32;\n\n#endif\n\n#define opus_int         int                     /* used for counters etc; at least 16 bits */\n#define opus_int64       long long\n#define opus_int8        signed char\n\n#define opus_uint        unsigned int            /* used for counters etc; at least 16 bits */\n#define opus_uint64      unsigned long long\n#define opus_uint8       unsigned char\n\n#endif  /* OPUS_TYPES_H */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/analysis.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#define ANALYSIS_C\n\n#include <stdio.h>\n\n#include \"mathops.h\"\n#include \"kiss_fft.h\"\n#include \"celt.h\"\n#include \"modes.h\"\n#include \"arch.h\"\n#include \"quant_bands.h\"\n#include \"analysis.h\"\n#include \"mlp.h\"\n#include \"stack_alloc.h\"\n#include \"float_cast.h\"\n\n#ifndef M_PI\n#define M_PI 3.141592653\n#endif\n\n#ifndef DISABLE_FLOAT_API\n\nstatic const float dct_table[128] = {\n        0.250000f, 0.250000f, 0.250000f, 0.250000f, 0.250000f, 0.250000f, 0.250000f, 0.250000f,\n        0.250000f, 0.250000f, 
0.250000f, 0.250000f, 0.250000f, 0.250000f, 0.250000f, 0.250000f,\n        0.351851f, 0.338330f, 0.311806f, 0.273300f, 0.224292f, 0.166664f, 0.102631f, 0.034654f,\n       -0.034654f,-0.102631f,-0.166664f,-0.224292f,-0.273300f,-0.311806f,-0.338330f,-0.351851f,\n        0.346760f, 0.293969f, 0.196424f, 0.068975f,-0.068975f,-0.196424f,-0.293969f,-0.346760f,\n       -0.346760f,-0.293969f,-0.196424f,-0.068975f, 0.068975f, 0.196424f, 0.293969f, 0.346760f,\n        0.338330f, 0.224292f, 0.034654f,-0.166664f,-0.311806f,-0.351851f,-0.273300f,-0.102631f,\n        0.102631f, 0.273300f, 0.351851f, 0.311806f, 0.166664f,-0.034654f,-0.224292f,-0.338330f,\n        0.326641f, 0.135299f,-0.135299f,-0.326641f,-0.326641f,-0.135299f, 0.135299f, 0.326641f,\n        0.326641f, 0.135299f,-0.135299f,-0.326641f,-0.326641f,-0.135299f, 0.135299f, 0.326641f,\n        0.311806f, 0.034654f,-0.273300f,-0.338330f,-0.102631f, 0.224292f, 0.351851f, 0.166664f,\n       -0.166664f,-0.351851f,-0.224292f, 0.102631f, 0.338330f, 0.273300f,-0.034654f,-0.311806f,\n        0.293969f,-0.068975f,-0.346760f,-0.196424f, 0.196424f, 0.346760f, 0.068975f,-0.293969f,\n       -0.293969f, 0.068975f, 0.346760f, 0.196424f,-0.196424f,-0.346760f,-0.068975f, 0.293969f,\n        0.273300f,-0.166664f,-0.338330f, 0.034654f, 0.351851f, 0.102631f,-0.311806f,-0.224292f,\n        0.224292f, 0.311806f,-0.102631f,-0.351851f,-0.034654f, 0.338330f, 0.166664f,-0.273300f,\n};\n\nstatic const float analysis_window[240] = {\n      0.000043f, 0.000171f, 0.000385f, 0.000685f, 0.001071f, 0.001541f, 0.002098f, 0.002739f,\n      0.003466f, 0.004278f, 0.005174f, 0.006156f, 0.007222f, 0.008373f, 0.009607f, 0.010926f,\n      0.012329f, 0.013815f, 0.015385f, 0.017037f, 0.018772f, 0.020590f, 0.022490f, 0.024472f,\n      0.026535f, 0.028679f, 0.030904f, 0.033210f, 0.035595f, 0.038060f, 0.040604f, 0.043227f,\n      0.045928f, 0.048707f, 0.051564f, 0.054497f, 0.057506f, 0.060591f, 0.063752f, 0.066987f,\n      0.070297f, 0.073680f, 0.077136f, 
0.080665f, 0.084265f, 0.087937f, 0.091679f, 0.095492f,\n      0.099373f, 0.103323f, 0.107342f, 0.111427f, 0.115579f, 0.119797f, 0.124080f, 0.128428f,\n      0.132839f, 0.137313f, 0.141849f, 0.146447f, 0.151105f, 0.155823f, 0.160600f, 0.165435f,\n      0.170327f, 0.175276f, 0.180280f, 0.185340f, 0.190453f, 0.195619f, 0.200838f, 0.206107f,\n      0.211427f, 0.216797f, 0.222215f, 0.227680f, 0.233193f, 0.238751f, 0.244353f, 0.250000f,\n      0.255689f, 0.261421f, 0.267193f, 0.273005f, 0.278856f, 0.284744f, 0.290670f, 0.296632f,\n      0.302628f, 0.308658f, 0.314721f, 0.320816f, 0.326941f, 0.333097f, 0.339280f, 0.345492f,\n      0.351729f, 0.357992f, 0.364280f, 0.370590f, 0.376923f, 0.383277f, 0.389651f, 0.396044f,\n      0.402455f, 0.408882f, 0.415325f, 0.421783f, 0.428254f, 0.434737f, 0.441231f, 0.447736f,\n      0.454249f, 0.460770f, 0.467298f, 0.473832f, 0.480370f, 0.486912f, 0.493455f, 0.500000f,\n      0.506545f, 0.513088f, 0.519630f, 0.526168f, 0.532702f, 0.539230f, 0.545751f, 0.552264f,\n      0.558769f, 0.565263f, 0.571746f, 0.578217f, 0.584675f, 0.591118f, 0.597545f, 0.603956f,\n      0.610349f, 0.616723f, 0.623077f, 0.629410f, 0.635720f, 0.642008f, 0.648271f, 0.654508f,\n      0.660720f, 0.666903f, 0.673059f, 0.679184f, 0.685279f, 0.691342f, 0.697372f, 0.703368f,\n      0.709330f, 0.715256f, 0.721144f, 0.726995f, 0.732807f, 0.738579f, 0.744311f, 0.750000f,\n      0.755647f, 0.761249f, 0.766807f, 0.772320f, 0.777785f, 0.783203f, 0.788573f, 0.793893f,\n      0.799162f, 0.804381f, 0.809547f, 0.814660f, 0.819720f, 0.824724f, 0.829673f, 0.834565f,\n      0.839400f, 0.844177f, 0.848895f, 0.853553f, 0.858151f, 0.862687f, 0.867161f, 0.871572f,\n      0.875920f, 0.880203f, 0.884421f, 0.888573f, 0.892658f, 0.896677f, 0.900627f, 0.904508f,\n      0.908321f, 0.912063f, 0.915735f, 0.919335f, 0.922864f, 0.926320f, 0.929703f, 0.933013f,\n      0.936248f, 0.939409f, 0.942494f, 0.945503f, 0.948436f, 0.951293f, 0.954072f, 0.956773f,\n      0.959396f, 0.961940f, 0.964405f, 
0.966790f, 0.969096f, 0.971321f, 0.973465f, 0.975528f,\n      0.977510f, 0.979410f, 0.981228f, 0.982963f, 0.984615f, 0.986185f, 0.987671f, 0.989074f,\n      0.990393f, 0.991627f, 0.992778f, 0.993844f, 0.994826f, 0.995722f, 0.996534f, 0.997261f,\n      0.997902f, 0.998459f, 0.998929f, 0.999315f, 0.999615f, 0.999829f, 0.999957f, 1.000000f,\n};\n\nstatic const int tbands[NB_TBANDS+1] = {\n      4, 8, 12, 16, 20, 24, 28, 32, 40, 48, 56, 64, 80, 96, 112, 136, 160, 192, 240\n};\n\n#define NB_TONAL_SKIP_BANDS 9\n\nstatic opus_val32 silk_resampler_down2_hp(\n    opus_val32                  *S,                 /* I/O  State vector [ 2 ]                                          */\n    opus_val32                  *out,               /* O    Output signal [ floor(len/2) ]                              */\n    const opus_val32            *in,                /* I    Input signal [ len ]                                        */\n    int                         inLen               /* I    Number of input samples                                     */\n)\n{\n    int k, len2 = inLen/2;\n    opus_val32 in32, out32, out32_hp, Y, X;\n    opus_val64 hp_ener = 0;\n    /* Internal variables and state are in Q10 format */\n    for( k = 0; k < len2; k++ ) {\n        /* Convert to Q10 */\n        in32 = in[ 2 * k ];\n\n        /* All-pass section for even input sample */\n        Y      = SUB32( in32, S[ 0 ] );\n        X      = MULT16_32_Q15(QCONST16(0.6074371f, 15), Y);\n        out32  = ADD32( S[ 0 ], X );\n        S[ 0 ] = ADD32( in32, X );\n        out32_hp = out32;\n        /* Convert to Q10 */\n        in32 = in[ 2 * k + 1 ];\n\n        /* All-pass section for odd input sample, and add to output of previous section */\n        Y      = SUB32( in32, S[ 1 ] );\n        X      = MULT16_32_Q15(QCONST16(0.15063f, 15), Y);\n        out32  = ADD32( out32, S[ 1 ] );\n        out32  = ADD32( out32, X );\n        S[ 1 ] = ADD32( in32, X );\n\n        Y      = SUB32( -in32, S[ 2 ] );\n        X 
     = MULT16_32_Q15(QCONST16(0.15063f, 15), Y);\n        out32_hp  = ADD32( out32_hp, S[ 2 ] );\n        out32_hp  = ADD32( out32_hp, X );\n        S[ 2 ] = ADD32( -in32, X );\n\n        hp_ener += out32_hp*(opus_val64)out32_hp;\n        /* Add, convert back to int16 and store to output */\n        out[ k ] = HALF32(out32);\n    }\n#ifdef FIXED_POINT\n    /* len2 can be up to 480, so we shift by 8 more to make it fit. */\n    hp_ener = hp_ener >> (2*SIG_SHIFT + 8);\n#endif\n    return (opus_val32)hp_ener;\n}\n\nstatic opus_val32 downmix_and_resample(downmix_func downmix, const void *_x, opus_val32 *y, opus_val32 S[3], int subframe, int offset, int c1, int c2, int C, int Fs)\n{\n   VARDECL(opus_val32, tmp);\n   opus_val32 scale;\n   int j;\n   opus_val32 ret = 0;\n   SAVE_STACK;\n\n   if (subframe==0) return 0;\n   if (Fs == 48000)\n   {\n      subframe *= 2;\n      offset *= 2;\n   } else if (Fs == 16000) {\n      subframe = subframe*2/3;\n      offset = offset*2/3;\n   }\n   ALLOC(tmp, subframe, opus_val32);\n\n   downmix(_x, tmp, subframe, offset, c1, c2, C);\n#ifdef FIXED_POINT\n   scale = (1<<SIG_SHIFT);\n#else\n   scale = 1.f/32768;\n#endif\n   if (c2==-2)\n      scale /= C;\n   else if (c2>-1)\n      scale /= 2;\n   for (j=0;j<subframe;j++)\n      tmp[j] *= scale;\n   if (Fs == 48000)\n   {\n      ret = silk_resampler_down2_hp(S, y, tmp, subframe);\n   } else if (Fs == 24000) {\n      OPUS_COPY(y, tmp, subframe);\n   } else if (Fs == 16000) {\n      VARDECL(opus_val32, tmp3x);\n      ALLOC(tmp3x, 3*subframe, opus_val32);\n      /* Don't do this at home! This resampler is horrible and it's only (barely)\n         usable for the purpose of the analysis because we don't care about all\n         the aliasing between 8 kHz and 12 kHz. 
*/\n      for (j=0;j<subframe;j++)\n      {\n         tmp3x[3*j] = tmp[j];\n         tmp3x[3*j+1] = tmp[j];\n         tmp3x[3*j+2] = tmp[j];\n      }\n      silk_resampler_down2_hp(S, y, tmp3x, 3*subframe);\n   }\n   RESTORE_STACK;\n   return ret;\n}\n\nvoid tonality_analysis_init(TonalityAnalysisState *tonal, opus_int32 Fs)\n{\n  /* Initialize reusable fields. */\n  tonal->arch = opus_select_arch();\n  tonal->Fs = Fs;\n  /* Clear remaining fields. */\n  tonality_analysis_reset(tonal);\n}\n\nvoid tonality_analysis_reset(TonalityAnalysisState *tonal)\n{\n  /* Clear non-reusable fields. */\n  char *start = (char*)&tonal->TONALITY_ANALYSIS_RESET_START;\n  OPUS_CLEAR(start, sizeof(TonalityAnalysisState) - (start - (char*)tonal));\n  tonal->music_confidence = .9f;\n  tonal->speech_confidence = .1f;\n}\n\nvoid tonality_get_info(TonalityAnalysisState *tonal, AnalysisInfo *info_out, int len)\n{\n   int pos;\n   int curr_lookahead;\n   float psum;\n   float tonality_max;\n   float tonality_avg;\n   int tonality_count;\n   int i;\n\n   pos = tonal->read_pos;\n   curr_lookahead = tonal->write_pos-tonal->read_pos;\n   if (curr_lookahead<0)\n      curr_lookahead += DETECT_SIZE;\n\n   /* On long frames, look at the second analysis window rather than the first. */\n   if (len > tonal->Fs/50 && pos != tonal->write_pos)\n   {\n      pos++;\n      if (pos==DETECT_SIZE)\n         pos=0;\n   }\n   if (pos == tonal->write_pos)\n      pos--;\n   if (pos<0)\n      pos = DETECT_SIZE-1;\n   OPUS_COPY(info_out, &tonal->info[pos], 1);\n   tonality_max = tonality_avg = info_out->tonality;\n   tonality_count = 1;\n   /* If possible, look ahead for a tone to compensate for the delay in the tone detector. 
*/\n   for (i=0;i<3;i++)\n   {\n      pos++;\n      if (pos==DETECT_SIZE)\n         pos = 0;\n      if (pos == tonal->write_pos)\n         break;\n      tonality_max = MAX32(tonality_max, tonal->info[pos].tonality);\n      tonality_avg += tonal->info[pos].tonality;\n      tonality_count++;\n   }\n   info_out->tonality = MAX32(tonality_avg/tonality_count, tonality_max-.2f);\n   tonal->read_subframe += len/(tonal->Fs/400);\n   while (tonal->read_subframe>=8)\n   {\n      tonal->read_subframe -= 8;\n      tonal->read_pos++;\n   }\n   if (tonal->read_pos>=DETECT_SIZE)\n      tonal->read_pos-=DETECT_SIZE;\n\n   /* The -1 is to compensate for the delay in the features themselves. */\n   curr_lookahead = IMAX(curr_lookahead-1, 0);\n\n   psum=0;\n   /* Summing the probability of transition patterns that involve music at\n      time (DETECT_SIZE-curr_lookahead-1) */\n   for (i=0;i<DETECT_SIZE-curr_lookahead;i++)\n      psum += tonal->pmusic[i];\n   for (;i<DETECT_SIZE;i++)\n      psum += tonal->pspeech[i];\n   psum = psum*tonal->music_confidence + (1-psum)*tonal->speech_confidence;\n   /*printf(\"%f %f %f %f %f\\n\", psum, info_out->music_prob, info_out->vad_prob, info_out->activity_probability, info_out->tonality);*/\n\n   info_out->music_prob = psum;\n}\n\nstatic const float std_feature_bias[9] = {\n      5.684947f, 3.475288f, 1.770634f, 1.599784f, 3.773215f,\n      2.163313f, 1.260756f, 1.116868f, 1.918795f\n};\n\n#define LEAKAGE_OFFSET 2.5f\n#define LEAKAGE_SLOPE 2.f\n\n#ifdef FIXED_POINT\n/* For fixed-point, the input is +/-2^15 shifted up by SIG_SHIFT, so we need to\n   compensate for that in the energy. 
*/\n#define SCALE_COMPENS (1.f/((opus_int32)1<<(15+SIG_SHIFT)))\n#define SCALE_ENER(e) ((SCALE_COMPENS*SCALE_COMPENS)*(e))\n#else\n#define SCALE_ENER(e) (e)\n#endif\n\nstatic void tonality_analysis(TonalityAnalysisState *tonal, const CELTMode *celt_mode, const void *x, int len, int offset, int c1, int c2, int C, int lsb_depth, downmix_func downmix)\n{\n    int i, b;\n    const kiss_fft_state *kfft;\n    VARDECL(kiss_fft_cpx, in);\n    VARDECL(kiss_fft_cpx, out);\n    int N = 480, N2=240;\n    float * OPUS_RESTRICT A = tonal->angle;\n    float * OPUS_RESTRICT dA = tonal->d_angle;\n    float * OPUS_RESTRICT d2A = tonal->d2_angle;\n    VARDECL(float, tonality);\n    VARDECL(float, noisiness);\n    float band_tonality[NB_TBANDS];\n    float logE[NB_TBANDS];\n    float BFCC[8];\n    float features[25];\n    float frame_tonality;\n    float max_frame_tonality;\n    /*float tw_sum=0;*/\n    float frame_noisiness;\n    const float pi4 = (float)(M_PI*M_PI*M_PI*M_PI);\n    float slope=0;\n    float frame_stationarity;\n    float relativeE;\n    float frame_probs[2];\n    float alpha, alphaE, alphaE2;\n    float frame_loudness;\n    float bandwidth_mask;\n    int bandwidth=0;\n    float maxE = 0;\n    float noise_floor;\n    int remaining;\n    AnalysisInfo *info;\n    float hp_ener;\n    float tonality2[240];\n    float midE[8];\n    float spec_variability=0;\n    float band_log2[NB_TBANDS+1];\n    float leakage_from[NB_TBANDS+1];\n    float leakage_to[NB_TBANDS+1];\n    SAVE_STACK;\n\n    alpha = 1.f/IMIN(10, 1+tonal->count);\n    alphaE = 1.f/IMIN(25, 1+tonal->count);\n    alphaE2 = 1.f/IMIN(500, 1+tonal->count);\n\n    if (tonal->Fs == 48000)\n    {\n       /* len and offset are now at 24 kHz. 
*/\n       len/= 2;\n       offset /= 2;\n    } else if (tonal->Fs == 16000) {\n       len = 3*len/2;\n       offset = 3*offset/2;\n    }\n\n    if (tonal->count<4) {\n       if (tonal->application == OPUS_APPLICATION_VOIP)\n          tonal->music_prob = .1f;\n       else\n          tonal->music_prob = .625f;\n    }\n    kfft = celt_mode->mdct.kfft[0];\n    if (tonal->count==0)\n       tonal->mem_fill = 240;\n    tonal->hp_ener_accum += (float)downmix_and_resample(downmix, x,\n          &tonal->inmem[tonal->mem_fill], tonal->downmix_state,\n          IMIN(len, ANALYSIS_BUF_SIZE-tonal->mem_fill), offset, c1, c2, C, tonal->Fs);\n    if (tonal->mem_fill+len < ANALYSIS_BUF_SIZE)\n    {\n       tonal->mem_fill += len;\n       /* Don't have enough to update the analysis */\n       RESTORE_STACK;\n       return;\n    }\n    hp_ener = tonal->hp_ener_accum;\n    info = &tonal->info[tonal->write_pos++];\n    if (tonal->write_pos>=DETECT_SIZE)\n       tonal->write_pos-=DETECT_SIZE;\n\n    ALLOC(in, 480, kiss_fft_cpx);\n    ALLOC(out, 480, kiss_fft_cpx);\n    ALLOC(tonality, 240, float);\n    ALLOC(noisiness, 240, float);\n    for (i=0;i<N2;i++)\n    {\n       float w = analysis_window[i];\n       in[i].r = (kiss_fft_scalar)(w*tonal->inmem[i]);\n       in[i].i = (kiss_fft_scalar)(w*tonal->inmem[N2+i]);\n       in[N-i-1].r = (kiss_fft_scalar)(w*tonal->inmem[N-i-1]);\n       in[N-i-1].i = (kiss_fft_scalar)(w*tonal->inmem[N+N2-i-1]);\n    }\n    OPUS_MOVE(tonal->inmem, tonal->inmem+ANALYSIS_BUF_SIZE-240, 240);\n    remaining = len - (ANALYSIS_BUF_SIZE-tonal->mem_fill);\n    tonal->hp_ener_accum = (float)downmix_and_resample(downmix, x,\n          &tonal->inmem[240], tonal->downmix_state, remaining,\n          offset+ANALYSIS_BUF_SIZE-tonal->mem_fill, c1, c2, C, tonal->Fs);\n    tonal->mem_fill = 240 + remaining;\n    opus_fft(kfft, in, out, tonal->arch);\n#ifndef FIXED_POINT\n    /* If there's any NaN on the input, the entire output will be NaN, so we only need to check one 
value. */\n    if (celt_isnan(out[0].r))\n    {\n       info->valid = 0;\n       RESTORE_STACK;\n       return;\n    }\n#endif\n\n    for (i=1;i<N2;i++)\n    {\n       float X1r, X2r, X1i, X2i;\n       float angle, d_angle, d2_angle;\n       float angle2, d_angle2, d2_angle2;\n       float mod1, mod2, avg_mod;\n       X1r = (float)out[i].r+out[N-i].r;\n       X1i = (float)out[i].i-out[N-i].i;\n       X2r = (float)out[i].i+out[N-i].i;\n       X2i = (float)out[N-i].r-out[i].r;\n\n       angle = (float)(.5f/M_PI)*fast_atan2f(X1i, X1r);\n       d_angle = angle - A[i];\n       d2_angle = d_angle - dA[i];\n\n       angle2 = (float)(.5f/M_PI)*fast_atan2f(X2i, X2r);\n       d_angle2 = angle2 - angle;\n       d2_angle2 = d_angle2 - d_angle;\n\n       mod1 = d2_angle - (float)float2int(d2_angle);\n       noisiness[i] = ABS16(mod1);\n       mod1 *= mod1;\n       mod1 *= mod1;\n\n       mod2 = d2_angle2 - (float)float2int(d2_angle2);\n       noisiness[i] += ABS16(mod2);\n       mod2 *= mod2;\n       mod2 *= mod2;\n\n       avg_mod = .25f*(d2A[i]+mod1+2*mod2);\n       /* This introduces an extra delay of 2 frames in the detection. */\n       tonality[i] = 1.f/(1.f+40.f*16.f*pi4*avg_mod)-.015f;\n       /* No delay on this detection, but it's less reliable. 
*/\n       tonality2[i] = 1.f/(1.f+40.f*16.f*pi4*mod2)-.015f;\n\n       A[i] = angle2;\n       dA[i] = d_angle2;\n       d2A[i] = mod2;\n    }\n    for (i=2;i<N2-1;i++)\n    {\n       float tt = MIN32(tonality2[i], MAX32(tonality2[i-1], tonality2[i+1]));\n       tonality[i] = .9f*MAX32(tonality[i], tt-.1f);\n    }\n    frame_tonality = 0;\n    max_frame_tonality = 0;\n    /*tw_sum = 0;*/\n    info->activity = 0;\n    frame_noisiness = 0;\n    frame_stationarity = 0;\n    if (!tonal->count)\n    {\n       for (b=0;b<NB_TBANDS;b++)\n       {\n          tonal->lowE[b] = 1e10;\n          tonal->highE[b] = -1e10;\n       }\n    }\n    relativeE = 0;\n    frame_loudness = 0;\n    /* The energy of the very first band is special because of DC. */\n    {\n       float E = 0;\n       float X1r, X2r;\n       X1r = 2*(float)out[0].r;\n       X2r = 2*(float)out[0].i;\n       E = X1r*X1r + X2r*X2r;\n       for (i=1;i<4;i++)\n       {\n          float binE = out[i].r*(float)out[i].r + out[N-i].r*(float)out[N-i].r\n                     + out[i].i*(float)out[i].i + out[N-i].i*(float)out[N-i].i;\n          E += binE;\n       }\n       E = SCALE_ENER(E);\n       band_log2[0] = .5f*1.442695f*(float)log(E+1e-10f);\n    }\n    for (b=0;b<NB_TBANDS;b++)\n    {\n       float E=0, tE=0, nE=0;\n       float L1, L2;\n       float stationarity;\n       for (i=tbands[b];i<tbands[b+1];i++)\n       {\n          float binE = out[i].r*(float)out[i].r + out[N-i].r*(float)out[N-i].r\n                     + out[i].i*(float)out[i].i + out[N-i].i*(float)out[N-i].i;\n          binE = SCALE_ENER(binE);\n          E += binE;\n          tE += binE*MAX32(0, tonality[i]);\n          nE += binE*2.f*(.5f-noisiness[i]);\n       }\n#ifndef FIXED_POINT\n       /* Check for extreme band energies that could cause NaNs later. 
*/\n       if (!(E<1e9f) || celt_isnan(E))\n       {\n          info->valid = 0;\n          RESTORE_STACK;\n          return;\n       }\n#endif\n\n       tonal->E[tonal->E_count][b] = E;\n       frame_noisiness += nE/(1e-15f+E);\n\n       frame_loudness += (float)sqrt(E+1e-10f);\n       logE[b] = (float)log(E+1e-10f);\n       band_log2[b+1] = .5f*1.442695f*(float)log(E+1e-10f);\n       tonal->logE[tonal->E_count][b] = logE[b];\n       if (tonal->count==0)\n          tonal->highE[b] = tonal->lowE[b] = logE[b];\n       if (tonal->highE[b] > tonal->lowE[b] + 7.5)\n       {\n          if (tonal->highE[b] - logE[b] > logE[b] - tonal->lowE[b])\n             tonal->highE[b] -= .01f;\n          else\n             tonal->lowE[b] += .01f;\n       }\n       if (logE[b] > tonal->highE[b])\n       {\n          tonal->highE[b] = logE[b];\n          tonal->lowE[b] = MAX32(tonal->highE[b]-15, tonal->lowE[b]);\n       } else if (logE[b] < tonal->lowE[b])\n       {\n          tonal->lowE[b] = logE[b];\n          tonal->highE[b] = MIN32(tonal->lowE[b]+15, tonal->highE[b]);\n       }\n       relativeE += (logE[b]-tonal->lowE[b])/(1e-15f + (tonal->highE[b]-tonal->lowE[b]));\n\n       L1=L2=0;\n       for (i=0;i<NB_FRAMES;i++)\n       {\n          L1 += (float)sqrt(tonal->E[i][b]);\n          L2 += tonal->E[i][b];\n       }\n\n       stationarity = MIN16(0.99f,L1/(float)sqrt(1e-15+NB_FRAMES*L2));\n       stationarity *= stationarity;\n       stationarity *= stationarity;\n       frame_stationarity += stationarity;\n       /*band_tonality[b] = tE/(1e-15+E)*/;\n       band_tonality[b] = MAX16(tE/(1e-15f+E), stationarity*tonal->prev_band_tonality[b]);\n#if 0\n       if (b>=NB_TONAL_SKIP_BANDS)\n       {\n          frame_tonality += tweight[b]*band_tonality[b];\n          tw_sum += tweight[b];\n       }\n#else\n       frame_tonality += band_tonality[b];\n       if (b>=NB_TBANDS-NB_TONAL_SKIP_BANDS)\n          frame_tonality -= band_tonality[b-NB_TBANDS+NB_TONAL_SKIP_BANDS];\n#endif\n       
max_frame_tonality = MAX16(max_frame_tonality, (1.f+.03f*(b-NB_TBANDS))*frame_tonality);\n       slope += band_tonality[b]*(b-8);\n       /*printf(\"%f %f \", band_tonality[b], stationarity);*/\n       tonal->prev_band_tonality[b] = band_tonality[b];\n    }\n\n    leakage_from[0] = band_log2[0];\n    leakage_to[0] = band_log2[0] - LEAKAGE_OFFSET;\n    for (b=1;b<NB_TBANDS+1;b++)\n    {\n       float leak_slope = LEAKAGE_SLOPE*(tbands[b]-tbands[b-1])/4;\n       leakage_from[b] = MIN16(leakage_from[b-1]+leak_slope, band_log2[b]);\n       leakage_to[b] = MAX16(leakage_to[b-1]-leak_slope, band_log2[b]-LEAKAGE_OFFSET);\n    }\n    for (b=NB_TBANDS-2;b>=0;b--)\n    {\n       float leak_slope = LEAKAGE_SLOPE*(tbands[b+1]-tbands[b])/4;\n       leakage_from[b] = MIN16(leakage_from[b+1]+leak_slope, leakage_from[b]);\n       leakage_to[b] = MAX16(leakage_to[b+1]-leak_slope, leakage_to[b]);\n    }\n    celt_assert(NB_TBANDS+1 <= LEAK_BANDS);\n    for (b=0;b<NB_TBANDS+1;b++)\n    {\n       /* leak_boost[] is made up of two terms. The first, based on leakage_to[],\n          represents the boost needed to overcome the amount of analysis leakage\n          caused in a weaker band b by louder neighbouring bands.\n          The second, based on leakage_from[], applies to a loud band b for\n          which the quantization noise causes synthesis leakage to the weaker\n          neighbouring bands. 
*/\n       float boost = MAX16(0, leakage_to[b] - band_log2[b]) +\n             MAX16(0, band_log2[b] - (leakage_from[b]+LEAKAGE_OFFSET));\n       info->leak_boost[b] = IMIN(255, (int)floor(.5 + 64.f*boost));\n    }\n    for (;b<LEAK_BANDS;b++) info->leak_boost[b] = 0;\n\n    for (i=0;i<NB_FRAMES;i++)\n    {\n       int j;\n       float mindist = 1e15f;\n       for (j=0;j<NB_FRAMES;j++)\n       {\n          int k;\n          float dist=0;\n          for (k=0;k<NB_TBANDS;k++)\n          {\n             float tmp;\n             tmp = tonal->logE[i][k] - tonal->logE[j][k];\n             dist += tmp*tmp;\n          }\n          if (j!=i)\n             mindist = MIN32(mindist, dist);\n       }\n       spec_variability += mindist;\n    }\n    spec_variability = (float)sqrt(spec_variability/NB_FRAMES/NB_TBANDS);\n    bandwidth_mask = 0;\n    bandwidth = 0;\n    maxE = 0;\n    noise_floor = 5.7e-4f/(1<<(IMAX(0,lsb_depth-8)));\n    noise_floor *= noise_floor;\n    for (b=0;b<NB_TBANDS;b++)\n    {\n       float E=0;\n       int band_start, band_end;\n       /* Keep a margin of 300 Hz for aliasing */\n       band_start = tbands[b];\n       band_end = tbands[b+1];\n       for (i=band_start;i<band_end;i++)\n       {\n          float binE = out[i].r*(float)out[i].r + out[N-i].r*(float)out[N-i].r\n                     + out[i].i*(float)out[i].i + out[N-i].i*(float)out[N-i].i;\n          E += binE;\n       }\n       E = SCALE_ENER(E);\n       maxE = MAX32(maxE, E);\n       tonal->meanE[b] = MAX32((1-alphaE2)*tonal->meanE[b], E);\n       E = MAX32(E, tonal->meanE[b]);\n       /* Use a simple follower with 13 dB/Bark slope for spreading function */\n       bandwidth_mask = MAX32(.05f*bandwidth_mask, E);\n       /* Consider the band \"active\" only if all these conditions are met:\n          1) less than 10 dB below the simple follower\n          2) less than 90 dB below the peak band (maximal masking possible considering\n             both the ATH and the loudness-dependent slope of 
the spreading function)\n          3) above the PCM quantization noise floor\n          We use b+1 because the first CELT band isn't included in tbands[]\n       */\n       if (E>.1*bandwidth_mask && E*1e9f > maxE && E > noise_floor*(band_end-band_start))\n          bandwidth = b+1;\n    }\n    /* Special case for the last two bands, for which we don't have spectrum but only\n       the energy above 12 kHz. */\n    if (tonal->Fs == 48000) {\n       float ratio;\n       float E = hp_ener*(1.f/(240*240));\n       ratio = tonal->prev_bandwidth==20 ? 0.03f : 0.07f;\n#ifdef FIXED_POINT\n       /* silk_resampler_down2_hp() shifted right by an extra 8 bits. */\n       E *= 256.f*(1.f/Q15ONE)*(1.f/Q15ONE);\n#endif\n       maxE = MAX32(maxE, E);\n       tonal->meanE[b] = MAX32((1-alphaE2)*tonal->meanE[b], E);\n       E = MAX32(E, tonal->meanE[b]);\n       /* Use a simple follower with 13 dB/Bark slope for spreading function */\n       bandwidth_mask = MAX32(.05f*bandwidth_mask, E);\n       if (E>ratio*bandwidth_mask && E*1e9f > maxE && E > noise_floor*160)\n          bandwidth = 20;\n       /* This detector is unreliable, so if the bandwidth is close to SWB, assume it's FB. 
*/\n       if (bandwidth >= 17)\n          bandwidth = 20;\n    }\n    if (tonal->count<=2)\n       bandwidth = 20;\n    frame_loudness = 20*(float)log10(frame_loudness);\n    tonal->Etracker = MAX32(tonal->Etracker-.003f, frame_loudness);\n    tonal->lowECount *= (1-alphaE);\n    if (frame_loudness < tonal->Etracker-30)\n       tonal->lowECount += alphaE;\n\n    for (i=0;i<8;i++)\n    {\n       float sum=0;\n       for (b=0;b<16;b++)\n          sum += dct_table[i*16+b]*logE[b];\n       BFCC[i] = sum;\n    }\n    for (i=0;i<8;i++)\n    {\n       float sum=0;\n       for (b=0;b<16;b++)\n          sum += dct_table[i*16+b]*.5f*(tonal->highE[b]+tonal->lowE[b]);\n       midE[i] = sum;\n    }\n\n    frame_stationarity /= NB_TBANDS;\n    relativeE /= NB_TBANDS;\n    if (tonal->count<10)\n       relativeE = .5f;\n    frame_noisiness /= NB_TBANDS;\n#if 1\n    info->activity = frame_noisiness + (1-frame_noisiness)*relativeE;\n#else\n    info->activity = .5*(1+frame_noisiness-frame_stationarity);\n#endif\n    frame_tonality = (max_frame_tonality/(NB_TBANDS-NB_TONAL_SKIP_BANDS));\n    frame_tonality = MAX16(frame_tonality, tonal->prev_tonality*.8f);\n    tonal->prev_tonality = frame_tonality;\n\n    slope /= 8*8;\n    info->tonality_slope = slope;\n\n    tonal->E_count = (tonal->E_count+1)%NB_FRAMES;\n    tonal->count = IMIN(tonal->count+1, ANALYSIS_COUNT_MAX);\n    info->tonality = frame_tonality;\n\n    for (i=0;i<4;i++)\n       features[i] = -0.12299f*(BFCC[i]+tonal->mem[i+24]) + 0.49195f*(tonal->mem[i]+tonal->mem[i+16]) + 0.69693f*tonal->mem[i+8] - 1.4349f*tonal->cmean[i];\n\n    for (i=0;i<4;i++)\n       tonal->cmean[i] = (1-alpha)*tonal->cmean[i] + alpha*BFCC[i];\n\n    for (i=0;i<4;i++)\n        features[4+i] = 0.63246f*(BFCC[i]-tonal->mem[i+24]) + 0.31623f*(tonal->mem[i]-tonal->mem[i+16]);\n    for (i=0;i<3;i++)\n        features[8+i] = 0.53452f*(BFCC[i]+tonal->mem[i+24]) - 0.26726f*(tonal->mem[i]+tonal->mem[i+16]) -0.53452f*tonal->mem[i+8];\n\n    if (tonal->count > 
5)\n    {\n       for (i=0;i<9;i++)\n          tonal->std[i] = (1-alpha)*tonal->std[i] + alpha*features[i]*features[i];\n    }\n    for (i=0;i<4;i++)\n       features[i] = BFCC[i]-midE[i];\n\n    for (i=0;i<8;i++)\n    {\n       tonal->mem[i+24] = tonal->mem[i+16];\n       tonal->mem[i+16] = tonal->mem[i+8];\n       tonal->mem[i+8] = tonal->mem[i];\n       tonal->mem[i] = BFCC[i];\n    }\n    for (i=0;i<9;i++)\n       features[11+i] = (float)sqrt(tonal->std[i]) - std_feature_bias[i];\n    features[18] = spec_variability - 0.78f;\n    features[20] = info->tonality - 0.154723f;\n    features[21] = info->activity - 0.724643f;\n    features[22] = frame_stationarity - 0.743717f;\n    features[23] = info->tonality_slope + 0.069216f;\n    features[24] = tonal->lowECount - 0.067930f;\n\n    mlp_process(&net, features, frame_probs);\n    frame_probs[0] = .5f*(frame_probs[0]+1);\n    /* Curve fitting between the MLP probability and the actual probability */\n    /*frame_probs[0] = .01f + 1.21f*frame_probs[0]*frame_probs[0] - .23f*(float)pow(frame_probs[0], 10);*/\n    /* Probability of active audio (as opposed to silence) */\n    frame_probs[1] = .5f*frame_probs[1]+.5f;\n    frame_probs[1] *= frame_probs[1];\n\n    /* Probability of speech or music vs noise */\n    info->activity_probability = frame_probs[1];\n\n    /*printf(\"%f %f\\n\", frame_probs[0], frame_probs[1]);*/\n    {\n       /* Probability of state transition */\n       float tau;\n       /* Represents independence of the MLP probabilities, where\n          beta=1 means fully independent. */\n       float beta;\n       /* Denormalized probability of speech (p0) and music (p1) after update */\n       float p0, p1;\n       /* Probabilities for \"all speech\" and \"all music\" */\n       float s0, m0;\n       /* Probability sum for renormalisation */\n       float psum;\n       /* Instantaneous probability of speech and music, with beta pre-applied. 
*/\n       float speech0;\n       float music0;\n       float p, q;\n\n       /* More silence transitions for speech than for music. */\n       tau = .001f*tonal->music_prob + .01f*(1-tonal->music_prob);\n       p = MAX16(.05f,MIN16(.95f,frame_probs[1]));\n       q = MAX16(.05f,MIN16(.95f,tonal->vad_prob));\n       beta = .02f+.05f*ABS16(p-q)/(p*(1-q)+q*(1-p));\n       /* p0 and p1 are the probabilities of speech and music at this frame\n          using only information from previous frame and applying the\n          state transition model */\n       p0 = (1-tonal->vad_prob)*(1-tau) +    tonal->vad_prob *tau;\n       p1 =    tonal->vad_prob *(1-tau) + (1-tonal->vad_prob)*tau;\n       /* We apply the current probability with exponent beta to work around\n          the fact that the probability estimates aren't independent. */\n       p0 *= (float)pow(1-frame_probs[1], beta);\n       p1 *= (float)pow(frame_probs[1], beta);\n       /* Normalise the probabilities to get the Markov probability of activity. */\n       tonal->vad_prob = p1/(p0+p1);\n       info->vad_prob = tonal->vad_prob;\n       /* Consider that silence has a 50-50 probability of being speech or music. 
*/\n       frame_probs[0] = tonal->vad_prob*frame_probs[0] + (1-tonal->vad_prob)*.5f;\n\n       /* One transition every 3 minutes of active audio */\n       tau = .0001f;\n       /* Adapt beta based on how \"unexpected\" the new prob is */\n       p = MAX16(.05f,MIN16(.95f,frame_probs[0]));\n       q = MAX16(.05f,MIN16(.95f,tonal->music_prob));\n       beta = .02f+.05f*ABS16(p-q)/(p*(1-q)+q*(1-p));\n       /* p0 and p1 are the probabilities of speech and music at this frame\n          using only information from previous frame and applying the\n          state transition model */\n       p0 = (1-tonal->music_prob)*(1-tau) +    tonal->music_prob *tau;\n       p1 =    tonal->music_prob *(1-tau) + (1-tonal->music_prob)*tau;\n       /* We apply the current probability with exponent beta to work around\n          the fact that the probability estimates aren't independent. */\n       p0 *= (float)pow(1-frame_probs[0], beta);\n       p1 *= (float)pow(frame_probs[0], beta);\n       /* Normalise the probabilities to get the Markov probability of music. */\n       tonal->music_prob = p1/(p0+p1);\n       info->music_prob = tonal->music_prob;\n\n       /*printf(\"%f %f %f %f\\n\", frame_probs[0], frame_probs[1], tonal->music_prob, tonal->vad_prob);*/\n       /* This chunk of code deals with delayed decision. */\n       psum=1e-20f;\n       /* Instantaneous probability of speech and music, with beta pre-applied. */\n       speech0 = (float)pow(1-frame_probs[0], beta);\n       music0  = (float)pow(frame_probs[0], beta);\n       if (tonal->count==1)\n       {\n          if (tonal->application == OPUS_APPLICATION_VOIP)\n             tonal->pmusic[0] = .1f;\n          else\n             tonal->pmusic[0] = .625f;\n          tonal->pspeech[0] = 1-tonal->pmusic[0];\n       }\n       /* Updated probability of having only speech (s0) or only music (m0),\n          before considering the new observation. 
*/\n       s0 = tonal->pspeech[0] + tonal->pspeech[1];\n       m0 = tonal->pmusic [0] + tonal->pmusic [1];\n       /* Updates s0 and m0 with instantaneous probability. */\n       tonal->pspeech[0] = s0*(1-tau)*speech0;\n       tonal->pmusic [0] = m0*(1-tau)*music0;\n       /* Propagate the transition probabilities */\n       for (i=1;i<DETECT_SIZE-1;i++)\n       {\n          tonal->pspeech[i] = tonal->pspeech[i+1]*speech0;\n          tonal->pmusic [i] = tonal->pmusic [i+1]*music0;\n       }\n       /* Probability that the latest frame is speech, when all the previous ones were music. */\n       tonal->pspeech[DETECT_SIZE-1] = m0*tau*speech0;\n       /* Probability that the latest frame is music, when all the previous ones were speech. */\n       tonal->pmusic [DETECT_SIZE-1] = s0*tau*music0;\n\n       /* Renormalise probabilities to 1 */\n       for (i=0;i<DETECT_SIZE;i++)\n          psum += tonal->pspeech[i] + tonal->pmusic[i];\n       psum = 1.f/psum;\n       for (i=0;i<DETECT_SIZE;i++)\n       {\n          tonal->pspeech[i] *= psum;\n          tonal->pmusic [i] *= psum;\n       }\n       psum = tonal->pmusic[0];\n       for (i=1;i<DETECT_SIZE;i++)\n          psum += tonal->pspeech[i];\n\n       /* Estimate our confidence in the speech/music decisions */\n       if (frame_probs[1]>.75)\n       {\n          if (tonal->music_prob>.9)\n          {\n             float adapt;\n             adapt = 1.f/(++tonal->music_confidence_count);\n             tonal->music_confidence_count = IMIN(tonal->music_confidence_count, 500);\n             tonal->music_confidence += adapt*MAX16(-.2f,frame_probs[0]-tonal->music_confidence);\n          }\n          if (tonal->music_prob<.1)\n          {\n             float adapt;\n             adapt = 1.f/(++tonal->speech_confidence_count);\n             tonal->speech_confidence_count = IMIN(tonal->speech_confidence_count, 500);\n             tonal->speech_confidence += adapt*MIN16(.2f,frame_probs[0]-tonal->speech_confidence);\n          
}\n       }\n    }\n    tonal->last_music = tonal->music_prob>.5f;\n#ifdef MLP_TRAINING\n    for (i=0;i<25;i++)\n       printf(\"%f \", features[i]);\n    printf(\"\\n\");\n#endif\n\n    info->bandwidth = bandwidth;\n    tonal->prev_bandwidth = bandwidth;\n    /*printf(\"%d %d\\n\", info->bandwidth, info->opus_bandwidth);*/\n    info->noisiness = frame_noisiness;\n    info->valid = 1;\n    RESTORE_STACK;\n}\n\nvoid run_analysis(TonalityAnalysisState *analysis, const CELTMode *celt_mode, const void *analysis_pcm,\n                 int analysis_frame_size, int frame_size, int c1, int c2, int C, opus_int32 Fs,\n                 int lsb_depth, downmix_func downmix, AnalysisInfo *analysis_info)\n{\n   int offset;\n   int pcm_len;\n\n   analysis_frame_size -= analysis_frame_size&1;\n   if (analysis_pcm != NULL)\n   {\n      /* Avoid overflow/wrap-around of the analysis buffer */\n      analysis_frame_size = IMIN((DETECT_SIZE-5)*Fs/50, analysis_frame_size);\n\n      pcm_len = analysis_frame_size - analysis->analysis_offset;\n      offset = analysis->analysis_offset;\n      while (pcm_len>0) {\n         tonality_analysis(analysis, celt_mode, analysis_pcm, IMIN(Fs/50, pcm_len), offset, c1, c2, C, lsb_depth, downmix);\n         offset += Fs/50;\n         pcm_len -= Fs/50;\n      }\n      analysis->analysis_offset = analysis_frame_size;\n\n      analysis->analysis_offset -= frame_size;\n   }\n\n   analysis_info->valid = 0;\n   tonality_get_info(analysis, analysis_info, frame_size);\n}\n\n#endif /* DISABLE_FLOAT_API */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/analysis.h",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef ANALYSIS_H\n#define ANALYSIS_H\n\n#include \"celt.h\"\n#include \"opus_private.h\"\n\n#define NB_FRAMES 8\n#define NB_TBANDS 18\n#define ANALYSIS_BUF_SIZE 720 /* 30 ms at 24 kHz */\n\n/* At that point we can stop counting frames because it no longer matters. */\n#define ANALYSIS_COUNT_MAX 10000\n\n#define DETECT_SIZE 100\n\n/* Uncomment this to print the MLP features on stdout. 
*/\n/*#define MLP_TRAINING*/\n\ntypedef struct {\n   int arch;\n   int application;\n   opus_int32 Fs;\n#define TONALITY_ANALYSIS_RESET_START angle\n   float angle[240];\n   float d_angle[240];\n   float d2_angle[240];\n   opus_val32 inmem[ANALYSIS_BUF_SIZE];\n   int   mem_fill;                      /* number of usable samples in the buffer */\n   float prev_band_tonality[NB_TBANDS];\n   float prev_tonality;\n   int prev_bandwidth;\n   float E[NB_FRAMES][NB_TBANDS];\n   float logE[NB_FRAMES][NB_TBANDS];\n   float lowE[NB_TBANDS];\n   float highE[NB_TBANDS];\n   float meanE[NB_TBANDS+1];\n   float mem[32];\n   float cmean[8];\n   float std[9];\n   float music_prob;\n   float vad_prob;\n   float Etracker;\n   float lowECount;\n   int E_count;\n   int last_music;\n   int count;\n   int analysis_offset;\n   /** Probability of having speech for time i to DETECT_SIZE-1 (and music before).\n       pspeech[0] is the probability that all frames in the window are speech. */\n   float pspeech[DETECT_SIZE];\n   /** Probability of having music for time i to DETECT_SIZE-1 (and speech before).\n       pmusic[0] is the probability that all frames in the window are music. */\n   float pmusic[DETECT_SIZE];\n   float speech_confidence;\n   float music_confidence;\n   int speech_confidence_count;\n   int music_confidence_count;\n   int write_pos;\n   int read_pos;\n   int read_subframe;\n   float hp_ener_accum;\n   opus_val32 downmix_state[3];\n   AnalysisInfo info[DETECT_SIZE];\n} TonalityAnalysisState;\n\n/** Initialize a TonalityAnalysisState struct.\n *\n * This performs some possibly slow initialization steps which should\n * not be repeated every analysis step. 
No allocated memory is retained\n * by the state struct, so no cleanup call is required.\n */\nvoid tonality_analysis_init(TonalityAnalysisState *analysis, opus_int32 Fs);\n\n/** Reset a TonalityAnalysisState struct.\n *\n * Call this when there's a discontinuity in the data.\n */\nvoid tonality_analysis_reset(TonalityAnalysisState *analysis);\n\nvoid tonality_get_info(TonalityAnalysisState *tonal, AnalysisInfo *info_out, int len);\n\nvoid run_analysis(TonalityAnalysisState *analysis, const CELTMode *celt_mode, const void *analysis_pcm,\n                 int analysis_frame_size, int frame_size, int c1, int c2, int C, opus_int32 Fs,\n                 int lsb_depth, downmix_func downmix, AnalysisInfo *analysis_info);\n\n#endif\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/mlp.c",
    "content": "/* Copyright (c) 2008-2011 Octasic Inc.\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus_types.h\"\n#include \"opus_defines.h\"\n\n#include <math.h>\n#include \"mlp.h\"\n#include \"arch.h\"\n#include \"tansig_table.h\"\n#define MAX_NEURONS 100\n\n#if 0\nstatic OPUS_INLINE opus_val16 tansig_approx(opus_val32 _x) /* Q19 */\n{\n    int i;\n    opus_val16 xx; /* Q11 */\n    /*double x, y;*/\n    opus_val16 dy, yy; /* Q14 */\n    /*x = 1.9073e-06*_x;*/\n    if (_x>=QCONST32(8,19))\n        return QCONST32(1.,14);\n    if (_x<=-QCONST32(8,19))\n        return -QCONST32(1.,14);\n    xx = 
EXTRACT16(SHR32(_x, 8));\n    /*i = lrint(25*x);*/\n    i = SHR32(ADD32(1024,MULT16_16(25, xx)),11);\n    /*x -= .04*i;*/\n    xx -= EXTRACT16(SHR32(MULT16_16(20972,i),8));\n    /*x = xx*(1./2048);*/\n    /*y = tansig_table[250+i];*/\n    yy = tansig_table[250+i];\n    /*y = yy*(1./16384);*/\n    dy = 16384-MULT16_16_Q14(yy,yy);\n    yy = yy + MULT16_16_Q14(MULT16_16_Q11(xx,dy),(16384 - MULT16_16_Q11(yy,xx)));\n    return yy;\n}\n#else\n/*extern const float tansig_table[501];*/\nstatic OPUS_INLINE float tansig_approx(float x)\n{\n    int i;\n    float y, dy;\n    float sign=1;\n    /* Tests are reversed to catch NaNs */\n    if (!(x<8))\n        return 1;\n    if (!(x>-8))\n        return -1;\n#ifndef FIXED_POINT\n    /* Another check in case of -ffast-math */\n    if (celt_isnan(x))\n       return 0;\n#endif\n    if (x<0)\n    {\n       x=-x;\n       sign=-1;\n    }\n    i = (int)floor(.5f+25*x);\n    x -= .04f*i;\n    y = tansig_table[i];\n    dy = 1-y*y;\n    y = y + x*dy*(1 - y*x);\n    return sign*y;\n}\n#endif\n\n#if 0\nvoid mlp_process(const MLP *m, const opus_val16 *in, opus_val16 *out)\n{\n    int j;\n    opus_val16 hidden[MAX_NEURONS];\n    const opus_val16 *W = m->weights;\n    /* Copy to tmp_in */\n    for (j=0;j<m->topo[1];j++)\n    {\n        int k;\n        opus_val32 sum = SHL32(EXTEND32(*W++),8);\n        for (k=0;k<m->topo[0];k++)\n            sum = MAC16_16(sum, in[k],*W++);\n        hidden[j] = tansig_approx(sum);\n    }\n    for (j=0;j<m->topo[2];j++)\n    {\n        int k;\n        opus_val32 sum = SHL32(EXTEND32(*W++),14);\n        for (k=0;k<m->topo[1];k++)\n            sum = MAC16_16(sum, hidden[k], *W++);\n        out[j] = tansig_approx(EXTRACT16(PSHR32(sum,17)));\n    }\n}\n#else\nvoid mlp_process(const MLP *m, const float *in, float *out)\n{\n    int j;\n    float hidden[MAX_NEURONS];\n    const float *W = m->weights;\n    /* Copy to tmp_in */\n    for (j=0;j<m->topo[1];j++)\n    {\n        int k;\n        float sum = *W++;\n        for 
(k=0;k<m->topo[0];k++)\n            sum = sum + in[k]**W++;\n        hidden[j] = tansig_approx(sum);\n    }\n    for (j=0;j<m->topo[2];j++)\n    {\n        int k;\n        float sum = *W++;\n        for (k=0;k<m->topo[1];k++)\n            sum = sum + hidden[k]**W++;\n        out[j] = tansig_approx(sum);\n    }\n}\n#endif\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/mlp.h",
    "content": "/* Copyright (c) 2008-2011 Octasic Inc.\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef _MLP_H_\n#define _MLP_H_\n\n#include \"arch.h\"\n\ntypedef struct {\n    int layers;\n    const int *topo;\n    const float *weights;\n} MLP;\n\nextern const MLP net;\n\nvoid mlp_process(const MLP *m, const float *in, float *out);\n\n#endif /* _MLP_H_ */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/mlp_data.c",
    "content": "#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"mlp.h\"\n\n/* RMS error was 0.280492, seed was 1480478173 */\n/* 0.005976 0.031821 (0.280494 0.280492) done */\n\nstatic const float weights[450] = {\n\n/* hidden layer */\n-0.514624f, 0.0234227f, -0.14329f, -0.0878216f, -0.00187827f,\n-0.0257443f, 0.108524f, 0.00333881f, 0.00585017f, -0.0246132f,\n0.142723f, -0.00436494f, 0.0101354f, -0.11124f, -0.0809367f,\n-0.0750772f, 0.0295524f, 0.00823944f, 0.150392f, 0.0320876f,\n-0.0710564f, -1.43818f, 0.652076f, 0.0650744f, -1.54821f,\n0.168949f, -1.92724f, 0.0517976f, -0.0670737f, -0.0690121f,\n0.00247528f, -0.0522024f, 0.0631368f, 0.0532776f, 0.047751f,\n-0.011715f, 0.142374f, -0.0290885f, -0.279263f, -0.433499f,\n-0.0795174f, -0.380458f, -0.051263f, 0.218537f, -0.322478f,\n1.06667f, -0.104607f, -4.70108f, 0.312037f, 0.277397f,\n-2.71859f, 1.70037f, -0.141845f, 0.0115618f, 0.0629883f,\n0.0403871f, 0.0139428f, -0.00430733f, -0.0429038f, -0.0590318f,\n-0.0501526f, -0.0284802f, -0.0415686f, -0.0438999f, 0.0822666f,\n0.197194f, 0.0363275f, -0.0584307f, 0.0752364f, -0.0799796f,\n-0.146275f, 0.161661f, -0.184585f, 0.145568f, 0.442823f,\n1.61221f, 1.11162f, 2.62177f, -2.482f, -0.112599f,\n-0.110366f, -0.140794f, -0.181694f, 0.0648674f, 0.0842248f,\n0.0933993f, 0.150122f, 0.129171f, 0.176848f, 0.141758f,\n-0.271822f, 0.235113f, 0.0668579f, -0.433957f, 0.113633f,\n-0.169348f, -1.40091f, 0.62861f, -0.134236f, 0.402173f,\n1.86373f, 1.53998f, -4.32084f, 0.735343f, 0.800214f,\n-0.00968415f, 0.0425904f, 0.0196811f, -0.018426f, -0.000343953f,\n-0.00416389f, 0.00111558f, 0.0173069f, -0.00998596f, -0.025898f,\n0.00123764f, -0.00520373f, -0.0565033f, 0.0637394f, 0.0051213f,\n0.0221361f, 0.00819962f, -0.0467061f, -0.0548258f, -0.00314063f,\n-1.18332f, 1.88091f, -0.41148f, -2.95727f, -0.521449f,\n-0.271641f, 0.124946f, -0.0532936f, 0.101515f, 0.000208564f,\n-0.0488748f, 0.0642388f, -0.0383848f, 0.0135046f, -0.0413592f,\n-0.0326402f, -0.0137421f, -0.0225219f, 
-0.0917294f, -0.277759f,\n-0.185418f, 0.0471128f, -0.125879f, 0.262467f, -0.212794f,\n-0.112931f, -1.99885f, -0.404787f, 0.224402f, 0.637962f,\n-0.27808f, -0.0723953f, -0.0537655f, -0.0336359f, -0.0906601f,\n-0.0641309f, -0.0713542f, 0.0524317f, 0.00608819f, 0.0754101f,\n-0.0488401f, -0.00671865f, 0.0418239f, 0.0536284f, -0.132639f,\n0.0267648f, -0.248432f, -0.0104153f, 0.035544f, -0.212753f,\n-0.302895f, -0.0357854f, 0.376838f, 0.597025f, -0.664647f,\n0.268422f, -0.376772f, -1.05472f, 0.0144178f, 0.179122f,\n0.0360155f, 0.220262f, -0.0056381f, 0.0317197f, 0.0621066f,\n-0.00779298f, 0.00789378f, 0.00350605f, 0.0104809f, 0.0362871f,\n-0.157708f, -0.0659779f, -0.0926278f, 0.00770791f, 0.0631621f,\n0.0817343f, -0.424295f, -0.0437727f, -0.24251f, 0.711217f,\n-0.736455f, -2.194f, -0.107612f, -0.175156f, -0.0366573f,\n-0.0123156f, -0.0628516f, -0.0218977f, -0.00693699f, 0.00695185f,\n0.00507362f, 0.00359334f, 0.0052661f, 0.035561f, 0.0382701f,\n0.0342179f, -0.00790271f, -0.0170925f, 0.047029f, 0.0197362f,\n-0.0153435f, 0.0644152f, -0.36862f, -0.0674876f, -2.82672f,\n1.34122f, -0.0788029f, -3.47792f, 0.507246f, -0.816378f,\n-0.0142383f, -0.127349f, -0.106926f, -0.0359524f, 0.105045f,\n0.291554f, 0.195413f, 0.0866214f, -0.066577f, -0.102188f,\n0.0979466f, -0.12982f, 0.400181f, -0.409336f, -0.0593326f,\n-0.0656203f, -0.204474f, 0.179802f, 0.000509084f, 0.0995954f,\n-2.377f, -0.686359f, 0.934861f, 1.10261f, 1.3901f,\n-4.33616f, -0.00264017f, 0.00713045f, 0.106264f, 0.143726f,\n-0.0685305f, -0.054656f, -0.0176725f, -0.0772669f, -0.0264526f,\n-0.0103824f, -0.0269872f, -0.00687f, 0.225804f, 0.407751f,\n-0.0612611f, -0.0576863f, -0.180131f, -0.222772f, -0.461742f,\n0.335236f, 1.03399f, 4.24112f, -0.345796f, -0.594549f,\n-76.1407f, -0.265276f, 0.0507719f, 0.0643044f, 0.0384832f,\n0.0424459f, -0.0387817f, -0.0235996f, -0.0740556f, -0.0270029f,\n0.00882177f, -0.0552371f, -0.00485851f, 0.314295f, 0.360431f,\n-0.0787085f, 0.110355f, -0.415958f, -0.385088f, -0.272224f,\n-1.55108f, 
-0.141848f, 0.448877f, -0.563447f, -2.31403f,\n-0.120077f, -1.49918f, -0.817726f, -0.0495854f, -0.0230782f,\n-0.0224014f, 0.117076f, 0.0393216f, 0.051997f, 0.0330763f,\n-0.110796f, 0.0211117f, -0.0197258f, 0.0187461f, 0.0125183f,\n0.14876f, 0.0920565f, -0.342475f, 0.135272f, -0.168155f,\n-0.033423f, -0.0604611f, -0.128835f, 0.664947f, -0.144997f,\n2.27649f, 1.28663f, 0.841217f, -2.42807f, 0.0230471f,\n0.226709f, -0.0374803f, 0.155436f, 0.0400342f, -0.184686f,\n0.128488f, -0.0939518f, -0.0578559f, 0.0265967f, -0.0999322f,\n-0.0322768f, -0.322994f, -0.189371f, -0.738069f, -0.0754914f,\n0.214717f, -0.093728f, -0.695741f, 0.0899298f, -2.06188f,\n-0.273719f, -0.896977f, 0.130553f, 0.134638f, 1.29355f,\n0.00520749f, -0.0324224f, 0.00530451f, 0.0192385f, 0.00328708f,\n0.0250838f, 0.0053365f, -0.0177321f, 0.00618789f, 0.00525364f,\n0.00104596f, -0.0360459f, 0.0402403f, -0.0406351f, 0.0136883f,\n0.0880722f, -0.0197449f, 0.089938f, 0.0100456f, -0.0475638f,\n-0.73267f, 0.037433f, -0.146551f, -0.230221f, -3.06489f,\n-1.40194f, 0.0198483f, 0.0397953f, -0.0190239f, 0.0470715f,\n-0.131363f, -0.191721f, -0.0176224f, -0.0480352f, -0.221799f,\n-0.26794f, -0.0292615f, 0.0612127f, -0.129877f, 0.00628332f,\n-0.085918f, 0.0175379f, 0.0541011f, -0.0810874f, -0.380809f,\n-0.222056f, -0.508859f, -0.473369f, 0.484958f, -2.28411f,\n0.0139516f,\n/* output layer */\n3.90017f, 1.71789f, -1.43372f, -2.70839f, 1.77107f,\n5.48006f, 1.44661f, 2.01134f, -1.88383f, -3.64958f,\n-1.26351f, 0.779421f, 2.11357f, 3.10409f, 1.68846f,\n-4.46197f, -1.61455f, 3.59832f, 2.43531f, -1.26458f,\n0.417941f, 1.47437f, 2.16635f, -1.909f, -0.828869f,\n1.38805f, -2.67975f, -0.110044f, 1.95596f, 0.697931f,\n-0.313226f, -0.889315f, 0.283236f, 0.946102f, };\n\nstatic const int topo[3] = {25, 16, 2};\n\nconst MLP net = {\n    3,\n    topo,\n    weights\n};\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation, Skype Limited\n   Written by Jean-Marc Valin and Koen Vos */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus.h\"\n#include \"opus_private.h\"\n\n#ifndef DISABLE_FLOAT_API\nOPUS_EXPORT void opus_pcm_soft_clip(float *_x, int N, int C, float *declip_mem)\n{\n   int c;\n   int i;\n   float *x;\n\n   if (C<1 || N<1 || !_x || !declip_mem) return;\n\n   /* First thing: saturate everything to +/- 2 which is the highest level our\n      non-linearity can handle. 
At the point where the signal reaches +/-2,\n      the derivative will be zero anyway, so this doesn't introduce any\n      discontinuity in the derivative. */\n   for (i=0;i<N*C;i++)\n      _x[i] = MAX16(-2.f, MIN16(2.f, _x[i]));\n   for (c=0;c<C;c++)\n   {\n      float a;\n      float x0;\n      int curr;\n\n      x = _x+c;\n      a = declip_mem[c];\n      /* Continue applying the non-linearity from the previous frame to avoid\n         any discontinuity. */\n      for (i=0;i<N;i++)\n      {\n         if (x[i*C]*a>=0)\n            break;\n         x[i*C] = x[i*C]+a*x[i*C]*x[i*C];\n      }\n\n      curr=0;\n      x0 = x[0];\n      while(1)\n      {\n         int start, end;\n         float maxval;\n         int special=0;\n         int peak_pos;\n         for (i=curr;i<N;i++)\n         {\n            if (x[i*C]>1 || x[i*C]<-1)\n               break;\n         }\n         if (i==N)\n         {\n            a=0;\n            break;\n         }\n         peak_pos = i;\n         start=end=i;\n         maxval=ABS16(x[i*C]);\n         /* Look for first zero crossing before clipping */\n         while (start>0 && x[i*C]*x[(start-1)*C]>=0)\n            start--;\n         /* Look for first zero crossing after clipping */\n         while (end<N && x[i*C]*x[end*C]>=0)\n         {\n            /* Look for other peaks until the next zero-crossing. */\n            if (ABS16(x[end*C])>maxval)\n            {\n               maxval = ABS16(x[end*C]);\n               peak_pos = end;\n            }\n            end++;\n         }\n         /* Detect the special case where we clip before the first zero crossing */\n         special = (start==0 && x[i*C]*x[0]>=0);\n\n         /* Compute a such that maxval + a*maxval^2 = 1 */\n         a=(maxval-1)/(maxval*maxval);\n         /* Slightly boost \"a\" by 2^-22. This is just enough to ensure -ffast-math\n            does not cause output values larger than +/-1, but small enough not\n            to matter even for 24-bit output.  
*/\n         a += a*2.4e-7f;\n         if (x[i*C]>0)\n            a = -a;\n         /* Apply soft clipping */\n         for (i=start;i<end;i++)\n            x[i*C] = x[i*C]+a*x[i*C]*x[i*C];\n\n         if (special && peak_pos>=2)\n         {\n            /* Add a linear ramp from the first sample to the signal peak.\n               This avoids a discontinuity at the beginning of the frame. */\n            float delta;\n            float offset = x0-x[0];\n            delta = offset / peak_pos;\n            for (i=curr;i<peak_pos;i++)\n            {\n               offset -= delta;\n               x[i*C] += offset;\n               x[i*C] = MAX16(-1.f, MIN16(1.f, x[i*C]));\n            }\n         }\n         curr = end;\n         if (curr==N)\n            break;\n      }\n      declip_mem[c] = a;\n   }\n}\n#endif\n\nint encode_size(int size, unsigned char *data)\n{\n   if (size < 252)\n   {\n      data[0] = size;\n      return 1;\n   } else {\n      data[0] = 252+(size&0x3);\n      data[1] = (size-(int)data[0])>>2;\n      return 2;\n   }\n}\n\nstatic int parse_size(const unsigned char *data, opus_int32 len, opus_int16 *size)\n{\n   if (len<1)\n   {\n      *size = -1;\n      return -1;\n   } else if (data[0]<252)\n   {\n      *size = data[0];\n      return 1;\n   } else if (len<2)\n   {\n      *size = -1;\n      return -1;\n   } else {\n      *size = 4*data[1] + data[0];\n      return 2;\n   }\n}\n\nint opus_packet_get_samples_per_frame(const unsigned char *data,\n      opus_int32 Fs)\n{\n   int audiosize;\n   if (data[0]&0x80)\n   {\n      audiosize = ((data[0]>>3)&0x3);\n      audiosize = (Fs<<audiosize)/400;\n   } else if ((data[0]&0x60) == 0x60)\n   {\n      audiosize = (data[0]&0x08) ? 
Fs/50 : Fs/100;\n   } else {\n      audiosize = ((data[0]>>3)&0x3);\n      if (audiosize == 3)\n         audiosize = Fs*60/1000;\n      else\n         audiosize = (Fs<<audiosize)/100;\n   }\n   return audiosize;\n}\n\nint opus_packet_parse_impl(const unsigned char *data, opus_int32 len,\n      int self_delimited, unsigned char *out_toc,\n      const unsigned char *frames[48], opus_int16 size[48],\n      int *payload_offset, opus_int32 *packet_offset)\n{\n   int i, bytes;\n   int count;\n   int cbr;\n   unsigned char ch, toc;\n   int framesize;\n   opus_int32 last_size;\n   opus_int32 pad = 0;\n   const unsigned char *data0 = data;\n\n   if (size==NULL || len<0)\n      return OPUS_BAD_ARG;\n   if (len==0)\n      return OPUS_INVALID_PACKET;\n\n   framesize = opus_packet_get_samples_per_frame(data, 48000);\n\n   cbr = 0;\n   toc = *data++;\n   len--;\n   last_size = len;\n   switch (toc&0x3)\n   {\n   /* One frame */\n   case 0:\n      count=1;\n      break;\n   /* Two CBR frames */\n   case 1:\n      count=2;\n      cbr = 1;\n      if (!self_delimited)\n      {\n         if (len&0x1)\n            return OPUS_INVALID_PACKET;\n         last_size = len/2;\n         /* If last_size doesn't fit in size[0], we'll catch it later */\n         size[0] = (opus_int16)last_size;\n      }\n      break;\n   /* Two VBR frames */\n   case 2:\n      count = 2;\n      bytes = parse_size(data, len, size);\n      len -= bytes;\n      if (size[0]<0 || size[0] > len)\n         return OPUS_INVALID_PACKET;\n      data += bytes;\n      last_size = len-size[0];\n      break;\n   /* Multiple CBR/VBR frames (from 0 to 120 ms) */\n   default: /*case 3:*/\n      if (len<1)\n         return OPUS_INVALID_PACKET;\n      /* Number of frames encoded in bits 0 to 5 */\n      ch = *data++;\n      count = ch&0x3F;\n      if (count <= 0 || framesize*count > 5760)\n         return OPUS_INVALID_PACKET;\n      len--;\n      /* Padding flag is bit 6 */\n      if (ch&0x40)\n      {\n         int p;\n         
do {\n            int tmp;\n            if (len<=0)\n               return OPUS_INVALID_PACKET;\n            p = *data++;\n            len--;\n            tmp = p==255 ? 254: p;\n            len -= tmp;\n            pad += tmp;\n         } while (p==255);\n      }\n      if (len<0)\n         return OPUS_INVALID_PACKET;\n      /* VBR flag is bit 7 */\n      cbr = !(ch&0x80);\n      if (!cbr)\n      {\n         /* VBR case */\n         last_size = len;\n         for (i=0;i<count-1;i++)\n         {\n            bytes = parse_size(data, len, size+i);\n            len -= bytes;\n            if (size[i]<0 || size[i] > len)\n               return OPUS_INVALID_PACKET;\n            data += bytes;\n            last_size -= bytes+size[i];\n         }\n         if (last_size<0)\n            return OPUS_INVALID_PACKET;\n      } else if (!self_delimited)\n      {\n         /* CBR case */\n         last_size = len/count;\n         if (last_size*count!=len)\n            return OPUS_INVALID_PACKET;\n         for (i=0;i<count-1;i++)\n            size[i] = (opus_int16)last_size;\n      }\n      break;\n   }\n   /* Self-delimited framing has an extra size for the last frame. */\n   if (self_delimited)\n   {\n      bytes = parse_size(data, len, size+count-1);\n      len -= bytes;\n      if (size[count-1]<0 || size[count-1] > len)\n         return OPUS_INVALID_PACKET;\n      data += bytes;\n      /* For CBR packets, apply the size to all the frames. */\n      if (cbr)\n      {\n         if (size[count-1]*count > len)\n            return OPUS_INVALID_PACKET;\n         for (i=0;i<count-1;i++)\n            size[i] = size[count-1];\n      } else if (bytes+size[count-1] > last_size)\n         return OPUS_INVALID_PACKET;\n   } else\n   {\n      /* Because it's not encoded explicitly, it's possible the size of the\n         last packet (or all the packets, for the CBR case) is larger than\n         1275. 
Reject them here.*/\n      if (last_size > 1275)\n         return OPUS_INVALID_PACKET;\n      size[count-1] = (opus_int16)last_size;\n   }\n\n   if (payload_offset)\n      *payload_offset = (int)(data-data0);\n\n   for (i=0;i<count;i++)\n   {\n      if (frames)\n         frames[i] = data;\n      data += size[i];\n   }\n\n   if (packet_offset)\n      *packet_offset = pad+(opus_int32)(data-data0);\n\n   if (out_toc)\n      *out_toc = toc;\n\n   return count;\n}\n\nint opus_packet_parse(const unsigned char *data, opus_int32 len,\n      unsigned char *out_toc, const unsigned char *frames[48],\n      opus_int16 size[48], int *payload_offset)\n{\n   return opus_packet_parse_impl(data, len, 0, out_toc,\n                                 frames, size, payload_offset, NULL);\n}\n\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_compare.c",
    "content": "/* Copyright (c) 2011-2012 Xiph.Org Foundation, Mozilla Corporation\n   Written by Jean-Marc Valin and Timothy B. Terriberry */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <string.h>\n\n#define OPUS_PI (3.14159265F)\n\n#define OPUS_COSF(_x)        ((float)cos(_x))\n#define OPUS_SINF(_x)        ((float)sin(_x))\n\nstatic void *check_alloc(void *_ptr){\n  if(_ptr==NULL){\n    fprintf(stderr,\"Out of memory.\\n\");\n    exit(EXIT_FAILURE);\n  }\n  return _ptr;\n}\n\nstatic void *opus_malloc(size_t _size){\n  return check_alloc(malloc(_size));\n}\n\nstatic void *opus_realloc(void *_ptr,size_t _size){\n  return 
check_alloc(realloc(_ptr,_size));\n}\n\nstatic size_t read_pcm16(float **_samples,FILE *_fin,int _nchannels){\n  unsigned char  buf[1024];\n  float         *samples;\n  size_t         nsamples;\n  size_t         csamples;\n  size_t         xi;\n  size_t         nread;\n  samples=NULL;\n  nsamples=csamples=0;\n  for(;;){\n    nread=fread(buf,2*_nchannels,1024/(2*_nchannels),_fin);\n    if(nread<=0)break;\n    if(nsamples+nread>csamples){\n      do csamples=csamples<<1|1;\n      while(nsamples+nread>csamples);\n      samples=(float *)opus_realloc(samples,\n       _nchannels*csamples*sizeof(*samples));\n    }\n    for(xi=0;xi<nread;xi++){\n      int ci;\n      for(ci=0;ci<_nchannels;ci++){\n        int s;\n        s=buf[2*(xi*_nchannels+ci)+1]<<8|buf[2*(xi*_nchannels+ci)];\n        s=((s&0xFFFF)^0x8000)-0x8000;\n        samples[(nsamples+xi)*_nchannels+ci]=s;\n      }\n    }\n    nsamples+=nread;\n  }\n  *_samples=(float *)opus_realloc(samples,\n   _nchannels*nsamples*sizeof(*samples));\n  return nsamples;\n}\n\nstatic void band_energy(float *_out,float *_ps,const int *_bands,int _nbands,\n const float *_in,int _nchannels,size_t _nframes,int _window_sz,\n int _step,int _downsample){\n  float *window;\n  float *x;\n  float *c;\n  float *s;\n  size_t xi;\n  int    xj;\n  int    ps_sz;\n  window=(float *)opus_malloc((3+_nchannels)*_window_sz*sizeof(*window));\n  c=window+_window_sz;\n  s=c+_window_sz;\n  x=s+_window_sz;\n  ps_sz=_window_sz/2;\n  for(xj=0;xj<_window_sz;xj++){\n    window[xj]=0.5F-0.5F*OPUS_COSF((2*OPUS_PI/(_window_sz-1))*xj);\n  }\n  for(xj=0;xj<_window_sz;xj++){\n    c[xj]=OPUS_COSF((2*OPUS_PI/_window_sz)*xj);\n  }\n  for(xj=0;xj<_window_sz;xj++){\n    s[xj]=OPUS_SINF((2*OPUS_PI/_window_sz)*xj);\n  }\n  for(xi=0;xi<_nframes;xi++){\n    int ci;\n    int xk;\n    int bi;\n    for(ci=0;ci<_nchannels;ci++){\n      for(xk=0;xk<_window_sz;xk++){\n        x[ci*_window_sz+xk]=window[xk]*_in[(xi*_step+xk)*_nchannels+ci];\n      }\n    }\n    
for(bi=xj=0;bi<_nbands;bi++){\n      float p[2]={0};\n      for(;xj<_bands[bi+1];xj++){\n        for(ci=0;ci<_nchannels;ci++){\n          float re;\n          float im;\n          int   ti;\n          ti=0;\n          re=im=0;\n          for(xk=0;xk<_window_sz;xk++){\n            re+=c[ti]*x[ci*_window_sz+xk];\n            im-=s[ti]*x[ci*_window_sz+xk];\n            ti+=xj;\n            if(ti>=_window_sz)ti-=_window_sz;\n          }\n          re*=_downsample;\n          im*=_downsample;\n          _ps[(xi*ps_sz+xj)*_nchannels+ci]=re*re+im*im+100000;\n          p[ci]+=_ps[(xi*ps_sz+xj)*_nchannels+ci];\n        }\n      }\n      if(_out){\n        _out[(xi*_nbands+bi)*_nchannels]=p[0]/(_bands[bi+1]-_bands[bi]);\n        if(_nchannels==2){\n          _out[(xi*_nbands+bi)*_nchannels+1]=p[1]/(_bands[bi+1]-_bands[bi]);\n        }\n      }\n    }\n  }\n  free(window);\n}\n\n#define NBANDS (21)\n#define NFREQS (240)\n\n/*Bands on which we compute the pseudo-NMR (Bark-derived\n  CELT bands).*/\nstatic const int BANDS[NBANDS+1]={\n  0,2,4,6,8,10,12,14,16,20,24,28,32,40,48,56,68,80,96,120,156,200\n};\n\n#define TEST_WIN_SIZE (480)\n#define TEST_WIN_STEP (120)\n\nint main(int _argc,const char **_argv){\n  FILE    *fin1;\n  FILE    *fin2;\n  float   *x;\n  float   *y;\n  float   *xb;\n  float   *X;\n  float   *Y;\n  double    err;\n  float    Q;\n  size_t   xlength;\n  size_t   ylength;\n  size_t   nframes;\n  size_t   xi;\n  int      ci;\n  int      xj;\n  int      bi;\n  int      nchannels;\n  unsigned rate;\n  int      downsample;\n  int      ybands;\n  int      yfreqs;\n  int      max_compare;\n  if(_argc<3||_argc>6){\n    fprintf(stderr,\"Usage: %s [-s] [-r rate2] <file1.sw> <file2.sw>\\n\",\n     _argv[0]);\n    return EXIT_FAILURE;\n  }\n  nchannels=1;\n  if(strcmp(_argv[1],\"-s\")==0){\n    nchannels=2;\n    _argv++;\n  }\n  rate=48000;\n  ybands=NBANDS;\n  yfreqs=NFREQS;\n  downsample=1;\n  if(strcmp(_argv[1],\"-r\")==0){\n    rate=atoi(_argv[2]);\n    
if(rate!=8000&&rate!=12000&&rate!=16000&&rate!=24000&&rate!=48000){\n      fprintf(stderr,\n       \"Sampling rate must be 8000, 12000, 16000, 24000, or 48000\\n\");\n      return EXIT_FAILURE;\n    }\n    downsample=48000/rate;\n    switch(rate){\n      case  8000:ybands=13;break;\n      case 12000:ybands=15;break;\n      case 16000:ybands=17;break;\n      case 24000:ybands=19;break;\n    }\n    yfreqs=NFREQS/downsample;\n    _argv+=2;\n  }\n  fin1=fopen(_argv[1],\"rb\");\n  if(fin1==NULL){\n    fprintf(stderr,\"Error opening '%s'.\\n\",_argv[1]);\n    return EXIT_FAILURE;\n  }\n  fin2=fopen(_argv[2],\"rb\");\n  if(fin2==NULL){\n    fprintf(stderr,\"Error opening '%s'.\\n\",_argv[2]);\n    fclose(fin1);\n    return EXIT_FAILURE;\n  }\n  /*Read in the data and allocate scratch space.*/\n  xlength=read_pcm16(&x,fin1,2);\n  if(nchannels==1){\n    for(xi=0;xi<xlength;xi++)x[xi]=.5*(x[2*xi]+x[2*xi+1]);\n  }\n  fclose(fin1);\n  ylength=read_pcm16(&y,fin2,nchannels);\n  fclose(fin2);\n  if(xlength!=ylength*downsample){\n    fprintf(stderr,\"Sample counts do not match (%lu!=%lu).\\n\",\n     (unsigned long)xlength,(unsigned long)ylength*downsample);\n    return EXIT_FAILURE;\n  }\n  if(xlength<TEST_WIN_SIZE){\n    fprintf(stderr,\"Insufficient sample data (%lu<%i).\\n\",\n     (unsigned long)xlength,TEST_WIN_SIZE);\n    return EXIT_FAILURE;\n  }\n  nframes=(xlength-TEST_WIN_SIZE+TEST_WIN_STEP)/TEST_WIN_STEP;\n  xb=(float *)opus_malloc(nframes*NBANDS*nchannels*sizeof(*xb));\n  X=(float *)opus_malloc(nframes*NFREQS*nchannels*sizeof(*X));\n  Y=(float *)opus_malloc(nframes*yfreqs*nchannels*sizeof(*Y));\n  /*Compute the per-band spectral energy of the original signal\n     and the error.*/\n  band_energy(xb,X,BANDS,NBANDS,x,nchannels,nframes,\n   TEST_WIN_SIZE,TEST_WIN_STEP,1);\n  free(x);\n  band_energy(NULL,Y,BANDS,ybands,y,nchannels,nframes,\n   TEST_WIN_SIZE/downsample,TEST_WIN_STEP/downsample,downsample);\n  free(y);\n  for(xi=0;xi<nframes;xi++){\n    /*Frequency masking 
(low to high): 10 dB/Bark slope.*/\n    for(bi=1;bi<NBANDS;bi++){\n      for(ci=0;ci<nchannels;ci++){\n        xb[(xi*NBANDS+bi)*nchannels+ci]+=\n         0.1F*xb[(xi*NBANDS+bi-1)*nchannels+ci];\n      }\n    }\n    /*Frequency masking (high to low): 15 dB/Bark slope.*/\n    for(bi=NBANDS-1;bi-->0;){\n      for(ci=0;ci<nchannels;ci++){\n        xb[(xi*NBANDS+bi)*nchannels+ci]+=\n         0.03F*xb[(xi*NBANDS+bi+1)*nchannels+ci];\n      }\n    }\n    if(xi>0){\n      /*Temporal masking: -3 dB/2.5ms slope.*/\n      for(bi=0;bi<NBANDS;bi++){\n        for(ci=0;ci<nchannels;ci++){\n          xb[(xi*NBANDS+bi)*nchannels+ci]+=\n           0.5F*xb[((xi-1)*NBANDS+bi)*nchannels+ci];\n        }\n      }\n    }\n    /* Allowing some cross-talk */\n    if(nchannels==2){\n      for(bi=0;bi<NBANDS;bi++){\n        float l,r;\n        l=xb[(xi*NBANDS+bi)*nchannels+0];\n        r=xb[(xi*NBANDS+bi)*nchannels+1];\n        xb[(xi*NBANDS+bi)*nchannels+0]+=0.01F*r;\n        xb[(xi*NBANDS+bi)*nchannels+1]+=0.01F*l;\n      }\n    }\n\n    /* Apply masking */\n    for(bi=0;bi<ybands;bi++){\n      for(xj=BANDS[bi];xj<BANDS[bi+1];xj++){\n        for(ci=0;ci<nchannels;ci++){\n          X[(xi*NFREQS+xj)*nchannels+ci]+=\n           0.1F*xb[(xi*NBANDS+bi)*nchannels+ci];\n          Y[(xi*yfreqs+xj)*nchannels+ci]+=\n           0.1F*xb[(xi*NBANDS+bi)*nchannels+ci];\n        }\n      }\n    }\n  }\n\n  /* Average of consecutive frames to make comparison slightly less sensitive */\n  for(bi=0;bi<ybands;bi++){\n    for(xj=BANDS[bi];xj<BANDS[bi+1];xj++){\n      for(ci=0;ci<nchannels;ci++){\n         float xtmp;\n         float ytmp;\n         xtmp = X[xj*nchannels+ci];\n         ytmp = Y[xj*nchannels+ci];\n         for(xi=1;xi<nframes;xi++){\n           float xtmp2;\n           float ytmp2;\n           xtmp2 = X[(xi*NFREQS+xj)*nchannels+ci];\n           ytmp2 = Y[(xi*yfreqs+xj)*nchannels+ci];\n           X[(xi*NFREQS+xj)*nchannels+ci] += xtmp;\n           Y[(xi*yfreqs+xj)*nchannels+ci] += ytmp;\n         
  xtmp = xtmp2;\n           ytmp = ytmp2;\n         }\n      }\n    }\n  }\n\n  /*If working at a lower sampling rate, don't take into account the last\n     300 Hz to allow for different transition bands.\n    For 12 kHz, we don't skip anything, because the last band already skips\n     400 Hz.*/\n  if(rate==48000)max_compare=BANDS[NBANDS];\n  else if(rate==12000)max_compare=BANDS[ybands];\n  else max_compare=BANDS[ybands]-3;\n  err=0;\n  for(xi=0;xi<nframes;xi++){\n    double Ef;\n    Ef=0;\n    for(bi=0;bi<ybands;bi++){\n      double Eb;\n      Eb=0;\n      for(xj=BANDS[bi];xj<BANDS[bi+1]&&xj<max_compare;xj++){\n        for(ci=0;ci<nchannels;ci++){\n          float re;\n          float im;\n          re=Y[(xi*yfreqs+xj)*nchannels+ci]/X[(xi*NFREQS+xj)*nchannels+ci];\n          im=re-log(re)-1;\n          /*Make comparison less sensitive around the SILK/CELT cross-over to\n            allow for mode freedom in the filters.*/\n          if(xj>=79&&xj<=81)im*=0.1F;\n          if(xj==80)im*=0.1F;\n          Eb+=im;\n        }\n      }\n      Eb /= (BANDS[bi+1]-BANDS[bi])*nchannels;\n      Ef += Eb*Eb;\n    }\n    /*Using a fixed normalization value means we're willing to accept slightly\n       lower quality for lower sampling rates.*/\n    Ef/=NBANDS;\n    Ef*=Ef;\n    err+=Ef*Ef;\n  }\n  free(xb);\n  free(X);\n  free(Y);\n  err=pow(err/nframes,1.0/16);\n  Q=100*(1-0.5*log(1+err)/log(1.13));\n  if(Q<0){\n    fprintf(stderr,\"Test vector FAILS\\n\");\n    fprintf(stderr,\"Internal weighted error is %f\\n\",err);\n    return EXIT_FAILURE;\n  }\n  else{\n    fprintf(stderr,\"Test vector PASSES\\n\");\n    fprintf(stderr,\n     \"Opus quality metric: %.1f %% (internal weighted error is %f)\\n\",Q,err);\n    return EXIT_SUCCESS;\n  }\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_decoder.c",
    "content": "/* Copyright (c) 2010 Xiph.Org Foundation, Skype Limited\n   Written by Jean-Marc Valin and Koen Vos */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n# include \"config.h\"\n#endif\n\n#ifndef OPUS_BUILD\n# error \"OPUS_BUILD _MUST_ be defined to build Opus. This probably means you need other defines as well, as in a config.h. 
See the included build files for details.\"\n#endif\n\n#if defined(__GNUC__) && (__GNUC__ >= 2) && !defined(__OPTIMIZE__) && !defined(OPUS_WILL_BE_SLOW)\n# pragma message \"You appear to be compiling without optimization, if so opus will be very slow.\"\n#endif\n\n#include <stdarg.h>\n#include \"celt.h\"\n#include \"opus.h\"\n#include \"entdec.h\"\n#include \"modes.h\"\n#include \"API.h\"\n#include \"stack_alloc.h\"\n#include \"float_cast.h\"\n#include \"opus_private.h\"\n#include \"os_support.h\"\n#include \"structs.h\"\n#include \"define.h\"\n#include \"mathops.h\"\n#include \"cpu_support.h\"\n\nstruct OpusDecoder {\n   int          celt_dec_offset;\n   int          silk_dec_offset;\n   int          channels;\n   opus_int32   Fs;          /** Sampling rate (at the API level) */\n   silk_DecControlStruct DecControl;\n   int          decode_gain;\n   int          arch;\n\n   /* Everything beyond this point gets cleared on a reset */\n#define OPUS_DECODER_RESET_START stream_channels\n   int          stream_channels;\n\n   int          bandwidth;\n   int          mode;\n   int          prev_mode;\n   int          frame_size;\n   int          prev_redundancy;\n   int          last_packet_duration;\n#ifndef FIXED_POINT\n   opus_val16   softclip_mem[2];\n#endif\n\n   opus_uint32  rangeFinal;\n};\n\n\nint opus_decoder_get_size(int channels)\n{\n   int silkDecSizeBytes, celtDecSizeBytes;\n   int ret;\n   if (channels<1 || channels > 2)\n      return 0;\n   ret = silk_Get_Decoder_Size( &silkDecSizeBytes );\n   if(ret)\n      return 0;\n   silkDecSizeBytes = align(silkDecSizeBytes);\n   celtDecSizeBytes = celt_decoder_get_size(channels);\n   return align(sizeof(OpusDecoder))+silkDecSizeBytes+celtDecSizeBytes;\n}\n\nint opus_decoder_init(OpusDecoder *st, opus_int32 Fs, int channels)\n{\n   void *silk_dec;\n   CELTDecoder *celt_dec;\n   int ret, silkDecSizeBytes;\n\n   if ((Fs!=48000&&Fs!=24000&&Fs!=16000&&Fs!=12000&&Fs!=8000)\n    || (channels!=1&&channels!=2))\n      return 
OPUS_BAD_ARG;\n\n   OPUS_CLEAR((char*)st, opus_decoder_get_size(channels));\n   /* Initialize SILK encoder */\n   ret = silk_Get_Decoder_Size(&silkDecSizeBytes);\n   if (ret)\n      return OPUS_INTERNAL_ERROR;\n\n   silkDecSizeBytes = align(silkDecSizeBytes);\n   st->silk_dec_offset = align(sizeof(OpusDecoder));\n   st->celt_dec_offset = st->silk_dec_offset+silkDecSizeBytes;\n   silk_dec = (char*)st+st->silk_dec_offset;\n   celt_dec = (CELTDecoder*)((char*)st+st->celt_dec_offset);\n   st->stream_channels = st->channels = channels;\n\n   st->Fs = Fs;\n   st->DecControl.API_sampleRate = st->Fs;\n   st->DecControl.nChannelsAPI      = st->channels;\n\n   /* Reset decoder */\n   ret = silk_InitDecoder( silk_dec );\n   if(ret)return OPUS_INTERNAL_ERROR;\n\n   /* Initialize CELT decoder */\n   ret = celt_decoder_init(celt_dec, Fs, channels);\n   if(ret!=OPUS_OK)return OPUS_INTERNAL_ERROR;\n\n   celt_decoder_ctl(celt_dec, CELT_SET_SIGNALLING(0));\n\n   st->prev_mode = 0;\n   st->frame_size = Fs/400;\n   st->arch = opus_select_arch();\n   return OPUS_OK;\n}\n\nOpusDecoder *opus_decoder_create(opus_int32 Fs, int channels, int *error)\n{\n   int ret;\n   OpusDecoder *st;\n   if ((Fs!=48000&&Fs!=24000&&Fs!=16000&&Fs!=12000&&Fs!=8000)\n    || (channels!=1&&channels!=2))\n   {\n      if (error)\n         *error = OPUS_BAD_ARG;\n      return NULL;\n   }\n   st = (OpusDecoder *)opus_alloc(opus_decoder_get_size(channels));\n   if (st == NULL)\n   {\n      if (error)\n         *error = OPUS_ALLOC_FAIL;\n      return NULL;\n   }\n   ret = opus_decoder_init(st, Fs, channels);\n   if (error)\n      *error = ret;\n   if (ret != OPUS_OK)\n   {\n      opus_free(st);\n      st = NULL;\n   }\n   return st;\n}\n\nstatic void smooth_fade(const opus_val16 *in1, const opus_val16 *in2,\n      opus_val16 *out, int overlap, int channels,\n      const opus_val16 *window, opus_int32 Fs)\n{\n   int i, c;\n   int inc = 48000/Fs;\n   for (c=0;c<channels;c++)\n   {\n      for (i=0;i<overlap;i++)\n      
{\n         opus_val16 w = MULT16_16_Q15(window[i*inc], window[i*inc]);\n         out[i*channels+c] = SHR32(MAC16_16(MULT16_16(w,in2[i*channels+c]),\n                                   Q15ONE-w, in1[i*channels+c]), 15);\n      }\n   }\n}\n\nstatic int opus_packet_get_mode(const unsigned char *data)\n{\n   int mode;\n   if (data[0]&0x80)\n   {\n      mode = MODE_CELT_ONLY;\n   } else if ((data[0]&0x60) == 0x60)\n   {\n      mode = MODE_HYBRID;\n   } else {\n      mode = MODE_SILK_ONLY;\n   }\n   return mode;\n}\n\nstatic int opus_decode_frame(OpusDecoder *st, const unsigned char *data,\n      opus_int32 len, opus_val16 *pcm, int frame_size, int decode_fec)\n{\n   void *silk_dec;\n   CELTDecoder *celt_dec;\n   int i, silk_ret=0, celt_ret=0;\n   ec_dec dec;\n   opus_int32 silk_frame_size;\n   int pcm_silk_size;\n   VARDECL(opus_int16, pcm_silk);\n   int pcm_transition_silk_size;\n   VARDECL(opus_val16, pcm_transition_silk);\n   int pcm_transition_celt_size;\n   VARDECL(opus_val16, pcm_transition_celt);\n   opus_val16 *pcm_transition=NULL;\n   int redundant_audio_size;\n   VARDECL(opus_val16, redundant_audio);\n\n   int audiosize;\n   int mode;\n   int transition=0;\n   int start_band;\n   int redundancy=0;\n   int redundancy_bytes = 0;\n   int celt_to_silk=0;\n   int c;\n   int F2_5, F5, F10, F20;\n   const opus_val16 *window;\n   opus_uint32 redundant_rng = 0;\n   int celt_accum;\n   ALLOC_STACK;\n\n   silk_dec = (char*)st+st->silk_dec_offset;\n   celt_dec = (CELTDecoder*)((char*)st+st->celt_dec_offset);\n   F20 = st->Fs/50;\n   F10 = F20>>1;\n   F5 = F10>>1;\n   F2_5 = F5>>1;\n   if (frame_size < F2_5)\n   {\n      RESTORE_STACK;\n      return OPUS_BUFFER_TOO_SMALL;\n   }\n   /* Limit frame_size to avoid excessive stack allocations. 
*/\n   frame_size = IMIN(frame_size, st->Fs/25*3);\n   /* Payloads of 1 (2 including ToC) or 0 trigger the PLC/DTX */\n   if (len<=1)\n   {\n      data = NULL;\n      /* In that case, don't conceal more than what the ToC says */\n      frame_size = IMIN(frame_size, st->frame_size);\n   }\n   if (data != NULL)\n   {\n      audiosize = st->frame_size;\n      mode = st->mode;\n      ec_dec_init(&dec,(unsigned char*)data,len);\n   } else {\n      audiosize = frame_size;\n      mode = st->prev_mode;\n\n      if (mode == 0)\n      {\n         /* If we haven't got any packet yet, all we can do is return zeros */\n         for (i=0;i<audiosize*st->channels;i++)\n            pcm[i] = 0;\n         RESTORE_STACK;\n         return audiosize;\n      }\n\n      /* Avoids trying to run the PLC on sizes other than 2.5 (CELT), 5 (CELT),\n         10, or 20 (e.g. 12.5 or 30 ms). */\n      if (audiosize > F20)\n      {\n         do {\n            int ret = opus_decode_frame(st, NULL, 0, pcm, IMIN(audiosize, F20), 0);\n            if (ret<0)\n            {\n               RESTORE_STACK;\n               return ret;\n            }\n            pcm += ret*st->channels;\n            audiosize -= ret;\n         } while (audiosize > 0);\n         RESTORE_STACK;\n         return frame_size;\n      } else if (audiosize < F20)\n      {\n         if (audiosize > F10)\n            audiosize = F10;\n         else if (mode != MODE_SILK_ONLY && audiosize > F5 && audiosize < F10)\n            audiosize = F5;\n      }\n   }\n\n   /* In fixed-point, we can tell CELT to do the accumulation on top of the\n      SILK PCM buffer. This saves some stack space. 
*/\n#ifdef FIXED_POINT\n   celt_accum = (mode != MODE_CELT_ONLY) && (frame_size >= F10);\n#else\n   celt_accum = 0;\n#endif\n\n   pcm_transition_silk_size = ALLOC_NONE;\n   pcm_transition_celt_size = ALLOC_NONE;\n   if (data!=NULL && st->prev_mode > 0 && (\n       (mode == MODE_CELT_ONLY && st->prev_mode != MODE_CELT_ONLY && !st->prev_redundancy)\n    || (mode != MODE_CELT_ONLY && st->prev_mode == MODE_CELT_ONLY) )\n      )\n   {\n      transition = 1;\n      /* Decide where to allocate the stack memory for pcm_transition */\n      if (mode == MODE_CELT_ONLY)\n         pcm_transition_celt_size = F5*st->channels;\n      else\n         pcm_transition_silk_size = F5*st->channels;\n   }\n   ALLOC(pcm_transition_celt, pcm_transition_celt_size, opus_val16);\n   if (transition && mode == MODE_CELT_ONLY)\n   {\n      pcm_transition = pcm_transition_celt;\n      opus_decode_frame(st, NULL, 0, pcm_transition, IMIN(F5, audiosize), 0);\n   }\n   if (audiosize > frame_size)\n   {\n      /*fprintf(stderr, \"PCM buffer too small: %d vs %d (mode = %d)\\n\", audiosize, frame_size, mode);*/\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   } else {\n      frame_size = audiosize;\n   }\n\n   /* Don't allocate any memory when in CELT-only mode */\n   pcm_silk_size = (mode != MODE_CELT_ONLY && !celt_accum) ? 
IMAX(F10, frame_size)*st->channels : ALLOC_NONE;\n   ALLOC(pcm_silk, pcm_silk_size, opus_int16);\n\n   /* SILK processing */\n   if (mode != MODE_CELT_ONLY)\n   {\n      int lost_flag, decoded_samples;\n      opus_int16 *pcm_ptr;\n#ifdef FIXED_POINT\n      if (celt_accum)\n         pcm_ptr = pcm;\n      else\n#endif\n         pcm_ptr = pcm_silk;\n\n      if (st->prev_mode==MODE_CELT_ONLY)\n         silk_InitDecoder( silk_dec );\n\n      /* The SILK PLC cannot produce frames of less than 10 ms */\n      st->DecControl.payloadSize_ms = IMAX(10, 1000 * audiosize / st->Fs);\n\n      if (data != NULL)\n      {\n        st->DecControl.nChannelsInternal = st->stream_channels;\n        if( mode == MODE_SILK_ONLY ) {\n           if( st->bandwidth == OPUS_BANDWIDTH_NARROWBAND ) {\n              st->DecControl.internalSampleRate = 8000;\n           } else if( st->bandwidth == OPUS_BANDWIDTH_MEDIUMBAND ) {\n              st->DecControl.internalSampleRate = 12000;\n           } else if( st->bandwidth == OPUS_BANDWIDTH_WIDEBAND ) {\n              st->DecControl.internalSampleRate = 16000;\n           } else {\n              st->DecControl.internalSampleRate = 16000;\n              silk_assert( 0 );\n           }\n        } else {\n           /* Hybrid mode */\n           st->DecControl.internalSampleRate = 16000;\n        }\n     }\n\n     lost_flag = data == NULL ? 
1 : 2 * decode_fec;\n     decoded_samples = 0;\n     do {\n        /* Call SILK decoder */\n        int first_frame = decoded_samples == 0;\n        silk_ret = silk_Decode( silk_dec, &st->DecControl,\n                                lost_flag, first_frame, &dec, pcm_ptr, &silk_frame_size, st->arch );\n        if( silk_ret ) {\n           if (lost_flag) {\n              /* PLC failure should not be fatal */\n              silk_frame_size = frame_size;\n              for (i=0;i<frame_size*st->channels;i++)\n                 pcm_ptr[i] = 0;\n           } else {\n             RESTORE_STACK;\n             return OPUS_INTERNAL_ERROR;\n           }\n        }\n        pcm_ptr += silk_frame_size * st->channels;\n        decoded_samples += silk_frame_size;\n      } while( decoded_samples < frame_size );\n   }\n\n   start_band = 0;\n   if (!decode_fec && mode != MODE_CELT_ONLY && data != NULL\n    && ec_tell(&dec)+17+20*(st->mode == MODE_HYBRID) <= 8*len)\n   {\n      /* Check if we have a redundant 0-8 kHz band */\n      if (mode == MODE_HYBRID)\n         redundancy = ec_dec_bit_logp(&dec, 12);\n      else\n         redundancy = 1;\n      if (redundancy)\n      {\n         celt_to_silk = ec_dec_bit_logp(&dec, 1);\n         /* redundancy_bytes will be at least two, in the non-hybrid\n            case due to the ec_tell() check above */\n         redundancy_bytes = mode==MODE_HYBRID ?\n               (opus_int32)ec_dec_uint(&dec, 256)+2 :\n               len-((ec_tell(&dec)+7)>>3);\n         len -= redundancy_bytes;\n         /* This is a sanity check. It should never happen for a valid\n            packet, so the exact behaviour is not normative. 
*/\n         if (len*8 < ec_tell(&dec))\n         {\n            len = 0;\n            redundancy_bytes = 0;\n            redundancy = 0;\n         }\n         /* Shrink decoder because of raw bits */\n         dec.storage -= redundancy_bytes;\n      }\n   }\n   if (mode != MODE_CELT_ONLY)\n      start_band = 17;\n\n   {\n      int endband=21;\n\n      switch(st->bandwidth)\n      {\n      case OPUS_BANDWIDTH_NARROWBAND:\n         endband = 13;\n         break;\n      case OPUS_BANDWIDTH_MEDIUMBAND:\n      case OPUS_BANDWIDTH_WIDEBAND:\n         endband = 17;\n         break;\n      case OPUS_BANDWIDTH_SUPERWIDEBAND:\n         endband = 19;\n         break;\n      case OPUS_BANDWIDTH_FULLBAND:\n         endband = 21;\n         break;\n      }\n      celt_decoder_ctl(celt_dec, CELT_SET_END_BAND(endband));\n      celt_decoder_ctl(celt_dec, CELT_SET_CHANNELS(st->stream_channels));\n   }\n\n   if (redundancy)\n   {\n      transition = 0;\n      pcm_transition_silk_size=ALLOC_NONE;\n   }\n\n   ALLOC(pcm_transition_silk, pcm_transition_silk_size, opus_val16);\n\n   if (transition && mode != MODE_CELT_ONLY)\n   {\n      pcm_transition = pcm_transition_silk;\n      opus_decode_frame(st, NULL, 0, pcm_transition, IMIN(F5, audiosize), 0);\n   }\n\n   /* Only allocate memory for redundancy if/when needed */\n   redundant_audio_size = redundancy ? 
F5*st->channels : ALLOC_NONE;\n   ALLOC(redundant_audio, redundant_audio_size, opus_val16);\n\n   /* 5 ms redundant frame for CELT->SILK*/\n   if (redundancy && celt_to_silk)\n   {\n      celt_decoder_ctl(celt_dec, CELT_SET_START_BAND(0));\n      celt_decode_with_ec(celt_dec, data+len, redundancy_bytes,\n                          redundant_audio, F5, NULL, 0);\n      celt_decoder_ctl(celt_dec, OPUS_GET_FINAL_RANGE(&redundant_rng));\n   }\n\n   /* MUST be after PLC */\n   celt_decoder_ctl(celt_dec, CELT_SET_START_BAND(start_band));\n\n   if (mode != MODE_SILK_ONLY)\n   {\n      int celt_frame_size = IMIN(F20, frame_size);\n      /* Make sure to discard any previous CELT state */\n      if (mode != st->prev_mode && st->prev_mode > 0 && !st->prev_redundancy)\n         celt_decoder_ctl(celt_dec, OPUS_RESET_STATE);\n      /* Decode CELT */\n      celt_ret = celt_decode_with_ec(celt_dec, decode_fec ? NULL : data,\n                                     len, pcm, celt_frame_size, &dec, celt_accum);\n   } else {\n      unsigned char silence[2] = {0xFF, 0xFF};\n      if (!celt_accum)\n      {\n         for (i=0;i<frame_size*st->channels;i++)\n            pcm[i] = 0;\n      }\n      /* For hybrid -> SILK transitions, we let the CELT MDCT\n         do a fade-out by decoding a silence frame */\n      if (st->prev_mode == MODE_HYBRID && !(redundancy && celt_to_silk && st->prev_redundancy) )\n      {\n         celt_decoder_ctl(celt_dec, CELT_SET_START_BAND(0));\n         celt_decode_with_ec(celt_dec, silence, 2, pcm, F2_5, NULL, celt_accum);\n      }\n   }\n\n   if (mode != MODE_CELT_ONLY && !celt_accum)\n   {\n#ifdef FIXED_POINT\n      for (i=0;i<frame_size*st->channels;i++)\n         pcm[i] = SAT16(ADD32(pcm[i], pcm_silk[i]));\n#else\n      for (i=0;i<frame_size*st->channels;i++)\n         pcm[i] = pcm[i] + (opus_val16)((1.f/32768.f)*pcm_silk[i]);\n#endif\n   }\n\n   {\n      const CELTMode *celt_mode;\n      celt_decoder_ctl(celt_dec, CELT_GET_MODE(&celt_mode));\n      window = 
celt_mode->window;\n   }\n\n   /* 5 ms redundant frame for SILK->CELT */\n   if (redundancy && !celt_to_silk)\n   {\n      celt_decoder_ctl(celt_dec, OPUS_RESET_STATE);\n      celt_decoder_ctl(celt_dec, CELT_SET_START_BAND(0));\n\n      celt_decode_with_ec(celt_dec, data+len, redundancy_bytes, redundant_audio, F5, NULL, 0);\n      celt_decoder_ctl(celt_dec, OPUS_GET_FINAL_RANGE(&redundant_rng));\n      smooth_fade(pcm+st->channels*(frame_size-F2_5), redundant_audio+st->channels*F2_5,\n                  pcm+st->channels*(frame_size-F2_5), F2_5, st->channels, window, st->Fs);\n   }\n   if (redundancy && celt_to_silk)\n   {\n      for (c=0;c<st->channels;c++)\n      {\n         for (i=0;i<F2_5;i++)\n            pcm[st->channels*i+c] = redundant_audio[st->channels*i+c];\n      }\n      smooth_fade(redundant_audio+st->channels*F2_5, pcm+st->channels*F2_5,\n                  pcm+st->channels*F2_5, F2_5, st->channels, window, st->Fs);\n   }\n   if (transition)\n   {\n      if (audiosize >= F5)\n      {\n         for (i=0;i<st->channels*F2_5;i++)\n            pcm[i] = pcm_transition[i];\n         smooth_fade(pcm_transition+st->channels*F2_5, pcm+st->channels*F2_5,\n                     pcm+st->channels*F2_5, F2_5,\n                     st->channels, window, st->Fs);\n      } else {\n         /* Not enough time to do a clean transition, but we do it anyway\n            This will not preserve amplitude perfectly and may introduce\n            a bit of temporal aliasing, but it shouldn't be too bad and\n            that's pretty much the best we can do. 
In any case, generating this\n            transition is pretty silly in the first place */\n         smooth_fade(pcm_transition, pcm,\n                     pcm, F2_5,\n                     st->channels, window, st->Fs);\n      }\n   }\n\n   if(st->decode_gain)\n   {\n      opus_val32 gain;\n      gain = celt_exp2(MULT16_16_P15(QCONST16(6.48814081e-4f, 25), st->decode_gain));\n      for (i=0;i<frame_size*st->channels;i++)\n      {\n         opus_val32 x;\n         x = MULT16_32_P16(pcm[i],gain);\n         pcm[i] = SATURATE(x, 32767);\n      }\n   }\n\n   if (len <= 1)\n      st->rangeFinal = 0;\n   else\n      st->rangeFinal = dec.rng ^ redundant_rng;\n\n   st->prev_mode = mode;\n   st->prev_redundancy = redundancy && !celt_to_silk;\n\n   if (celt_ret>=0)\n   {\n      if (OPUS_CHECK_ARRAY(pcm, audiosize*st->channels))\n         OPUS_PRINT_INT(audiosize);\n   }\n\n   RESTORE_STACK;\n   return celt_ret < 0 ? celt_ret : audiosize;\n\n}\n\nint opus_decode_native(OpusDecoder *st, const unsigned char *data,\n      opus_int32 len, opus_val16 *pcm, int frame_size, int decode_fec,\n      int self_delimited, opus_int32 *packet_offset, int soft_clip)\n{\n   int i, nb_samples;\n   int count, offset;\n   unsigned char toc;\n   int packet_frame_size, packet_bandwidth, packet_mode, packet_stream_channels;\n   /* 48 x 2.5 ms = 120 ms */\n   opus_int16 size[48];\n   if (decode_fec<0 || decode_fec>1)\n      return OPUS_BAD_ARG;\n   /* For FEC/PLC, frame_size has to be a multiple of 2.5 ms */\n   if ((decode_fec || len==0 || data==NULL) && frame_size%(st->Fs/400)!=0)\n      return OPUS_BAD_ARG;\n   if (len==0 || data==NULL)\n   {\n      int pcm_count=0;\n      do {\n         int ret;\n         ret = opus_decode_frame(st, NULL, 0, pcm+pcm_count*st->channels, frame_size-pcm_count, 0);\n         if (ret<0)\n            return ret;\n         pcm_count += ret;\n      } while (pcm_count < frame_size);\n      celt_assert(pcm_count == frame_size);\n      if (OPUS_CHECK_ARRAY(pcm, 
pcm_count*st->channels))\n         OPUS_PRINT_INT(pcm_count);\n      st->last_packet_duration = pcm_count;\n      return pcm_count;\n   } else if (len<0)\n      return OPUS_BAD_ARG;\n\n   packet_mode = opus_packet_get_mode(data);\n   packet_bandwidth = opus_packet_get_bandwidth(data);\n   packet_frame_size = opus_packet_get_samples_per_frame(data, st->Fs);\n   packet_stream_channels = opus_packet_get_nb_channels(data);\n\n   count = opus_packet_parse_impl(data, len, self_delimited, &toc, NULL,\n                                  size, &offset, packet_offset);\n   if (count<0)\n      return count;\n\n   data += offset;\n\n   if (decode_fec)\n   {\n      int duration_copy;\n      int ret;\n      /* If no FEC can be present, run the PLC (recursive call) */\n      if (frame_size < packet_frame_size || packet_mode == MODE_CELT_ONLY || st->mode == MODE_CELT_ONLY)\n         return opus_decode_native(st, NULL, 0, pcm, frame_size, 0, 0, NULL, soft_clip);\n      /* Otherwise, run the PLC on everything except the size for which we might have FEC */\n      duration_copy = st->last_packet_duration;\n      if (frame_size-packet_frame_size!=0)\n      {\n         ret = opus_decode_native(st, NULL, 0, pcm, frame_size-packet_frame_size, 0, 0, NULL, soft_clip);\n         if (ret<0)\n         {\n            st->last_packet_duration = duration_copy;\n            return ret;\n         }\n         celt_assert(ret==frame_size-packet_frame_size);\n      }\n      /* Complete with FEC */\n      st->mode = packet_mode;\n      st->bandwidth = packet_bandwidth;\n      st->frame_size = packet_frame_size;\n      st->stream_channels = packet_stream_channels;\n      ret = opus_decode_frame(st, data, size[0], pcm+st->channels*(frame_size-packet_frame_size),\n            packet_frame_size, 1);\n      if (ret<0)\n         return ret;\n      else {\n         if (OPUS_CHECK_ARRAY(pcm, frame_size*st->channels))\n            OPUS_PRINT_INT(frame_size);\n         st->last_packet_duration = frame_size;\n     
    return frame_size;\n      }\n   }\n\n   if (count*packet_frame_size > frame_size)\n      return OPUS_BUFFER_TOO_SMALL;\n\n   /* Update the state as the last step to avoid updating it on an invalid packet */\n   st->mode = packet_mode;\n   st->bandwidth = packet_bandwidth;\n   st->frame_size = packet_frame_size;\n   st->stream_channels = packet_stream_channels;\n\n   nb_samples=0;\n   for (i=0;i<count;i++)\n   {\n      int ret;\n      ret = opus_decode_frame(st, data, size[i], pcm+nb_samples*st->channels, frame_size-nb_samples, 0);\n      if (ret<0)\n         return ret;\n      celt_assert(ret==packet_frame_size);\n      data += size[i];\n      nb_samples += ret;\n   }\n   st->last_packet_duration = nb_samples;\n   if (OPUS_CHECK_ARRAY(pcm, nb_samples*st->channels))\n      OPUS_PRINT_INT(nb_samples);\n#ifndef FIXED_POINT\n   if (soft_clip)\n      opus_pcm_soft_clip(pcm, nb_samples, st->channels, st->softclip_mem);\n   else\n      st->softclip_mem[0]=st->softclip_mem[1]=0;\n#endif\n   return nb_samples;\n}\n\n#ifdef FIXED_POINT\n\nint opus_decode(OpusDecoder *st, const unsigned char *data,\n      opus_int32 len, opus_val16 *pcm, int frame_size, int decode_fec)\n{\n   if(frame_size<=0)\n      return OPUS_BAD_ARG;\n   return opus_decode_native(st, data, len, pcm, frame_size, decode_fec, 0, NULL, 0);\n}\n\n#ifndef DISABLE_FLOAT_API\nint opus_decode_float(OpusDecoder *st, const unsigned char *data,\n      opus_int32 len, float *pcm, int frame_size, int decode_fec)\n{\n   VARDECL(opus_int16, out);\n   int ret, i;\n   int nb_samples;\n   ALLOC_STACK;\n\n   if(frame_size<=0)\n   {\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   }\n   if (data != NULL && len > 0 && !decode_fec)\n   {\n      nb_samples = opus_decoder_get_nb_samples(st, data, len);\n      if (nb_samples>0)\n         frame_size = IMIN(frame_size, nb_samples);\n      else\n         return OPUS_INVALID_PACKET;\n   }\n   ALLOC(out, frame_size*st->channels, opus_int16);\n\n   ret = opus_decode_native(st, 
data, len, out, frame_size, decode_fec, 0, NULL, 0);\n   if (ret > 0)\n   {\n      for (i=0;i<ret*st->channels;i++)\n         pcm[i] = (1.f/32768.f)*(out[i]);\n   }\n   RESTORE_STACK;\n   return ret;\n}\n#endif\n\n\n#else\nint opus_decode(OpusDecoder *st, const unsigned char *data,\n      opus_int32 len, opus_int16 *pcm, int frame_size, int decode_fec)\n{\n   VARDECL(float, out);\n   int ret, i;\n   int nb_samples;\n   ALLOC_STACK;\n\n   if(frame_size<=0)\n   {\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   }\n\n   if (data != NULL && len > 0 && !decode_fec)\n   {\n      nb_samples = opus_decoder_get_nb_samples(st, data, len);\n      if (nb_samples>0)\n         frame_size = IMIN(frame_size, nb_samples);\n      else\n         return OPUS_INVALID_PACKET;\n   }\n   ALLOC(out, frame_size*st->channels, float);\n\n   ret = opus_decode_native(st, data, len, out, frame_size, decode_fec, 0, NULL, 1);\n   if (ret > 0)\n   {\n      for (i=0;i<ret*st->channels;i++)\n         pcm[i] = FLOAT2INT16(out[i]);\n   }\n   RESTORE_STACK;\n   return ret;\n}\n\nint opus_decode_float(OpusDecoder *st, const unsigned char *data,\n      opus_int32 len, opus_val16 *pcm, int frame_size, int decode_fec)\n{\n   if(frame_size<=0)\n      return OPUS_BAD_ARG;\n   return opus_decode_native(st, data, len, pcm, frame_size, decode_fec, 0, NULL, 0);\n}\n\n#endif\n\nint opus_decoder_ctl(OpusDecoder *st, int request, ...)\n{\n   int ret = OPUS_OK;\n   va_list ap;\n   void *silk_dec;\n   CELTDecoder *celt_dec;\n\n   silk_dec = (char*)st+st->silk_dec_offset;\n   celt_dec = (CELTDecoder*)((char*)st+st->celt_dec_offset);\n\n\n   va_start(ap, request);\n\n   switch (request)\n   {\n   case OPUS_GET_BANDWIDTH_REQUEST:\n   {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      *value = st->bandwidth;\n   }\n   break;\n   case OPUS_GET_FINAL_RANGE_REQUEST:\n   {\n      opus_uint32 *value = va_arg(ap, opus_uint32*);\n      if (!value)\n      
{\n         goto bad_arg;\n      }\n      *value = st->rangeFinal;\n   }\n   break;\n   case OPUS_RESET_STATE:\n   {\n      OPUS_CLEAR((char*)&st->OPUS_DECODER_RESET_START,\n            sizeof(OpusDecoder)-\n            ((char*)&st->OPUS_DECODER_RESET_START - (char*)st));\n\n      celt_decoder_ctl(celt_dec, OPUS_RESET_STATE);\n      silk_InitDecoder( silk_dec );\n      st->stream_channels = st->channels;\n      st->frame_size = st->Fs/400;\n   }\n   break;\n   case OPUS_GET_SAMPLE_RATE_REQUEST:\n   {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      *value = st->Fs;\n   }\n   break;\n   case OPUS_GET_PITCH_REQUEST:\n   {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      if (st->prev_mode == MODE_CELT_ONLY)\n         celt_decoder_ctl(celt_dec, OPUS_GET_PITCH(value));\n      else\n         *value = st->DecControl.prevPitchLag;\n   }\n   break;\n   case OPUS_GET_GAIN_REQUEST:\n   {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      *value = st->decode_gain;\n   }\n   break;\n   case OPUS_SET_GAIN_REQUEST:\n   {\n       opus_int32 value = va_arg(ap, opus_int32);\n       if (value<-32768 || value>32767)\n       {\n          goto bad_arg;\n       }\n       st->decode_gain = value;\n   }\n   break;\n   case OPUS_GET_LAST_PACKET_DURATION_REQUEST:\n   {\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      *value = st->last_packet_duration;\n   }\n   break;\n   case OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST:\n   {\n       opus_int32 value = va_arg(ap, opus_int32);\n       if(value<0 || value>1)\n       {\n          goto bad_arg;\n       }\n       celt_decoder_ctl(celt_dec, OPUS_SET_PHASE_INVERSION_DISABLED(value));\n   }\n   break;\n   case OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST:\n   {\n       
opus_int32 *value = va_arg(ap, opus_int32*);\n       if (!value)\n       {\n          goto bad_arg;\n       }\n       celt_decoder_ctl(celt_dec, OPUS_GET_PHASE_INVERSION_DISABLED(value));\n   }\n   break;\n   default:\n      /*fprintf(stderr, \"unknown opus_decoder_ctl() request: %d\", request);*/\n      ret = OPUS_UNIMPLEMENTED;\n      break;\n   }\n\n   va_end(ap);\n   return ret;\nbad_arg:\n   va_end(ap);\n   return OPUS_BAD_ARG;\n}\n\nvoid opus_decoder_destroy(OpusDecoder *st)\n{\n   opus_free(st);\n}\n\n\nint opus_packet_get_bandwidth(const unsigned char *data)\n{\n   int bandwidth;\n   if (data[0]&0x80)\n   {\n      bandwidth = OPUS_BANDWIDTH_MEDIUMBAND + ((data[0]>>5)&0x3);\n      if (bandwidth == OPUS_BANDWIDTH_MEDIUMBAND)\n         bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n   } else if ((data[0]&0x60) == 0x60)\n   {\n      bandwidth = (data[0]&0x10) ? OPUS_BANDWIDTH_FULLBAND :\n                                   OPUS_BANDWIDTH_SUPERWIDEBAND;\n   } else {\n      bandwidth = OPUS_BANDWIDTH_NARROWBAND + ((data[0]>>5)&0x3);\n   }\n   return bandwidth;\n}\n\nint opus_packet_get_nb_channels(const unsigned char *data)\n{\n   return (data[0]&0x4) ? 
2 : 1;\n}\n\nint opus_packet_get_nb_frames(const unsigned char packet[], opus_int32 len)\n{\n   int count;\n   if (len<1)\n      return OPUS_BAD_ARG;\n   count = packet[0]&0x3;\n   if (count==0)\n      return 1;\n   else if (count!=3)\n      return 2;\n   else if (len<2)\n      return OPUS_INVALID_PACKET;\n   else\n      return packet[1]&0x3F;\n}\n\nint opus_packet_get_nb_samples(const unsigned char packet[], opus_int32 len,\n      opus_int32 Fs)\n{\n   int samples;\n   int count = opus_packet_get_nb_frames(packet, len);\n\n   if (count<0)\n      return count;\n\n   samples = count*opus_packet_get_samples_per_frame(packet, Fs);\n   /* Can't have more than 120 ms */\n   if (samples*25 > Fs*3)\n      return OPUS_INVALID_PACKET;\n   else\n      return samples;\n}\n\nint opus_decoder_get_nb_samples(const OpusDecoder *dec,\n      const unsigned char packet[], opus_int32 len)\n{\n   return opus_packet_get_nb_samples(packet, len, dec->Fs);\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_demo.c",
    "content": "/* Copyright (c) 2007-2008 CSIRO\n   Copyright (c) 2007-2009 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <string.h>\n#include \"opus.h\"\n#include \"debug.h\"\n#include \"opus_types.h\"\n#include \"opus_private.h\"\n#include \"opus_multistream.h\"\n\n#define MAX_PACKET 1500\n\nvoid print_usage( char* argv[] )\n{\n    fprintf(stderr, \"Usage: %s [-e] <application> <sampling rate (Hz)> <channels (1/2)> \"\n        \"<bits per second>  [options] <input> <output>\\n\", argv[0]);\n    fprintf(stderr, \"       %s -d 
<sampling rate (Hz)> <channels (1/2)> \"\n        \"[options] <input> <output>\\n\\n\", argv[0]);\n    fprintf(stderr, \"application: voip | audio | restricted-lowdelay\\n\" );\n    fprintf(stderr, \"options:\\n\" );\n    fprintf(stderr, \"-e                   : only runs the encoder (output the bit-stream)\\n\" );\n    fprintf(stderr, \"-d                   : only runs the decoder (reads the bit-stream as input)\\n\" );\n    fprintf(stderr, \"-cbr                 : enable constant bitrate; default: variable bitrate\\n\" );\n    fprintf(stderr, \"-cvbr                : enable constrained variable bitrate; default: unconstrained\\n\" );\n    fprintf(stderr, \"-delayed-decision    : use look-ahead for speech/music detection (experts only); default: disabled\\n\" );\n    fprintf(stderr, \"-bandwidth <NB|MB|WB|SWB|FB> : audio bandwidth (from narrowband to fullband); default: sampling rate\\n\" );\n    fprintf(stderr, \"-framesize <2.5|5|10|20|40|60|80|100|120> : frame size in ms; default: 20 \\n\" );\n    fprintf(stderr, \"-max_payload <bytes> : maximum payload size in bytes, default: 1024\\n\" );\n    fprintf(stderr, \"-complexity <comp>   : complexity, 0 (lowest) ... 
10 (highest); default: 10\\n\" );\n    fprintf(stderr, \"-inbandfec           : enable SILK inband FEC\\n\" );\n    fprintf(stderr, \"-forcemono           : force mono encoding, even for stereo input\\n\" );\n    fprintf(stderr, \"-dtx                 : enable SILK DTX\\n\" );\n    fprintf(stderr, \"-loss <perc>         : simulate packet loss, in percent (0-100); default: 0\\n\" );\n}\n\nstatic void int_to_char(opus_uint32 i, unsigned char ch[4])\n{\n    ch[0] = i>>24;\n    ch[1] = (i>>16)&0xFF;\n    ch[2] = (i>>8)&0xFF;\n    ch[3] = i&0xFF;\n}\n\nstatic opus_uint32 char_to_int(unsigned char ch[4])\n{\n    return ((opus_uint32)ch[0]<<24) | ((opus_uint32)ch[1]<<16)\n         | ((opus_uint32)ch[2]<< 8) |  (opus_uint32)ch[3];\n}\n\nstatic void check_encoder_option(int decode_only, const char *opt)\n{\n   if (decode_only)\n   {\n      fprintf(stderr, \"option %s is only for encoding\\n\", opt);\n      exit(EXIT_FAILURE);\n   }\n}\n\nstatic const int silk8_test[][4] = {\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 960*3, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 960*2, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 960,   1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 480,   1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 960*3, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 960*2, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 960,   2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_NARROWBAND, 480,   2}\n};\n\nstatic const int silk12_test[][4] = {\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 960*3, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 960*2, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 960,   1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 480,   1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 960*3, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 960*2, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_MEDIUMBAND, 960,   2},\n      {MODE_SILK_ONLY, 
OPUS_BANDWIDTH_MEDIUMBAND, 480,   2}\n};\n\nstatic const int silk16_test[][4] = {\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 960*3, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 960*2, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 960,   1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 480,   1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 960*3, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 960*2, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 960,   2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_WIDEBAND, 480,   2}\n};\n\nstatic const int hybrid24_test[][4] = {\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 960, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 480, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 960, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 480, 2}\n};\n\nstatic const int hybrid48_test[][4] = {\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_FULLBAND, 960, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_FULLBAND, 480, 1},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_FULLBAND, 960, 2},\n      {MODE_SILK_ONLY, OPUS_BANDWIDTH_FULLBAND, 480, 2}\n};\n\nstatic const int celt_test[][4] = {\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      960, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 960, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      960, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    960, 1},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      480, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 480, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      480, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    480, 1},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      240, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 240, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      240, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    240, 1},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      
120, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 120, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      120, 1},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    120, 1},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      960, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 960, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      960, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    960, 2},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      480, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 480, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      480, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    480, 2},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      240, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 240, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      240, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    240, 2},\n\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      120, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_SUPERWIDEBAND, 120, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_WIDEBAND,      120, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_NARROWBAND,    120, 2},\n\n};\n\nstatic const int celt_hq_test[][4] = {\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      960, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      480, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      240, 2},\n      {MODE_CELT_ONLY, OPUS_BANDWIDTH_FULLBAND,      120, 2},\n};\n\n#if 0 /* This is a hack that replaces the normal encoder/decoder with the multistream version */\n#define OpusEncoder OpusMSEncoder\n#define OpusDecoder OpusMSDecoder\n#define opus_encode opus_multistream_encode\n#define opus_decode opus_multistream_decode\n#define opus_encoder_ctl opus_multistream_encoder_ctl\n#define opus_decoder_ctl opus_multistream_decoder_ctl\n#define opus_encoder_create ms_opus_encoder_create\n#define opus_decoder_create 
ms_opus_decoder_create\n#define opus_encoder_destroy opus_multistream_encoder_destroy\n#define opus_decoder_destroy opus_multistream_decoder_destroy\n\nstatic OpusEncoder *ms_opus_encoder_create(opus_int32 Fs, int channels, int application, int *error)\n{\n   int streams, coupled_streams;\n   unsigned char mapping[256];\n   return (OpusEncoder *)opus_multistream_surround_encoder_create(Fs, channels, 1, &streams, &coupled_streams, mapping, application, error);\n}\nstatic OpusDecoder *ms_opus_decoder_create(opus_int32 Fs, int channels, int *error)\n{\n   int streams;\n   int coupled_streams;\n   unsigned char mapping[256]={0,1};\n   streams = 1;\n   coupled_streams = channels==2;\n   return (OpusDecoder *)opus_multistream_decoder_create(Fs, channels, streams, coupled_streams, mapping, error);\n}\n#endif\n\nint main(int argc, char *argv[])\n{\n    int err;\n    char *inFile, *outFile;\n    FILE *fin, *fout;\n    OpusEncoder *enc=NULL;\n    OpusDecoder *dec=NULL;\n    int args;\n    int len[2];\n    int frame_size, channels;\n    opus_int32 bitrate_bps=0;\n    unsigned char *data[2];\n    unsigned char *fbytes;\n    opus_int32 sampling_rate;\n    int use_vbr;\n    int max_payload_bytes;\n    int complexity;\n    int use_inbandfec;\n    int use_dtx;\n    int forcechannels;\n    int cvbr = 0;\n    int packet_loss_perc;\n    opus_int32 count=0, count_act=0;\n    int k;\n    opus_int32 skip=0;\n    int stop=0;\n    short *in, *out;\n    int application=OPUS_APPLICATION_AUDIO;\n    double bits=0.0, bits_max=0.0, bits_act=0.0, bits2=0.0, nrg;\n    double tot_samples=0;\n    opus_uint64 tot_in, tot_out;\n    int bandwidth=OPUS_AUTO;\n    const char *bandwidth_string;\n    int lost = 0, lost_prev = 1;\n    int toggle = 0;\n    opus_uint32 enc_final_range[2];\n    opus_uint32 dec_final_range;\n    int encode_only=0, decode_only=0;\n    int max_frame_size = 48000*2;\n    size_t num_read;\n    int curr_read=0;\n    int sweep_bps = 0;\n    int random_framesize=0, newsize=0, 
delayed_celt=0;\n    int sweep_max=0, sweep_min=0;\n    int random_fec=0;\n    const int (*mode_list)[4]=NULL;\n    int nb_modes_in_list=0;\n    int curr_mode=0;\n    int curr_mode_count=0;\n    int mode_switch_time = 48000;\n    int nb_encoded=0;\n    int remaining=0;\n    int variable_duration=OPUS_FRAMESIZE_ARG;\n    int delayed_decision=0;\n\n    if (argc < 5 )\n    {\n       print_usage( argv );\n       return EXIT_FAILURE;\n    }\n\n    tot_in=tot_out=0;\n    fprintf(stderr, \"%s\\n\", opus_get_version_string());\n\n    args = 1;\n    if (strcmp(argv[args], \"-e\")==0)\n    {\n        encode_only = 1;\n        args++;\n    } else if (strcmp(argv[args], \"-d\")==0)\n    {\n        decode_only = 1;\n        args++;\n    }\n    if (!decode_only && argc < 7 )\n    {\n       print_usage( argv );\n       return EXIT_FAILURE;\n    }\n\n    if (!decode_only)\n    {\n       if (strcmp(argv[args], \"voip\")==0)\n          application = OPUS_APPLICATION_VOIP;\n       else if (strcmp(argv[args], \"restricted-lowdelay\")==0)\n          application = OPUS_APPLICATION_RESTRICTED_LOWDELAY;\n       else if (strcmp(argv[args], \"audio\")!=0) {\n          fprintf(stderr, \"unknown application: %s\\n\", argv[args]);\n          print_usage(argv);\n          return EXIT_FAILURE;\n       }\n       args++;\n    }\n    sampling_rate = (opus_int32)atol(argv[args]);\n    args++;\n\n    if (sampling_rate != 8000 && sampling_rate != 12000\n     && sampling_rate != 16000 && sampling_rate != 24000\n     && sampling_rate != 48000)\n    {\n        fprintf(stderr, \"Supported sampling rates are 8000, 12000, \"\n                \"16000, 24000 and 48000.\\n\");\n        return EXIT_FAILURE;\n    }\n    frame_size = sampling_rate/50;\n\n    channels = atoi(argv[args]);\n    args++;\n\n    if (channels < 1 || channels > 2)\n    {\n        fprintf(stderr, \"Opus_demo supports only 1 or 2 channels.\\n\");\n        return EXIT_FAILURE;\n    }\n\n    if (!decode_only)\n    {\n       bitrate_bps = 
(opus_int32)atol(argv[args]);\n       args++;\n    }\n\n    /* defaults: */\n    use_vbr = 1;\n    max_payload_bytes = MAX_PACKET;\n    complexity = 10;\n    use_inbandfec = 0;\n    forcechannels = OPUS_AUTO;\n    use_dtx = 0;\n    packet_loss_perc = 0;\n\n    while( args < argc - 2 ) {\n        /* process command line options */\n        if( strcmp( argv[ args ], \"-cbr\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-cbr\");\n            use_vbr = 0;\n            args++;\n        } else if( strcmp( argv[ args ], \"-bandwidth\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-bandwidth\");\n            if (strcmp(argv[ args + 1 ], \"NB\")==0)\n                bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n            else if (strcmp(argv[ args + 1 ], \"MB\")==0)\n                bandwidth = OPUS_BANDWIDTH_MEDIUMBAND;\n            else if (strcmp(argv[ args + 1 ], \"WB\")==0)\n                bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n            else if (strcmp(argv[ args + 1 ], \"SWB\")==0)\n                bandwidth = OPUS_BANDWIDTH_SUPERWIDEBAND;\n            else if (strcmp(argv[ args + 1 ], \"FB\")==0)\n                bandwidth = OPUS_BANDWIDTH_FULLBAND;\n            else {\n                fprintf(stderr, \"Unknown bandwidth %s. 
\"\n                                \"Supported are NB, MB, WB, SWB, FB.\\n\",\n                                argv[ args + 1 ]);\n                return EXIT_FAILURE;\n            }\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-framesize\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-framesize\");\n            if (strcmp(argv[ args + 1 ], \"2.5\")==0)\n                frame_size = sampling_rate/400;\n            else if (strcmp(argv[ args + 1 ], \"5\")==0)\n                frame_size = sampling_rate/200;\n            else if (strcmp(argv[ args + 1 ], \"10\")==0)\n                frame_size = sampling_rate/100;\n            else if (strcmp(argv[ args + 1 ], \"20\")==0)\n                frame_size = sampling_rate/50;\n            else if (strcmp(argv[ args + 1 ], \"40\")==0)\n                frame_size = sampling_rate/25;\n            else if (strcmp(argv[ args + 1 ], \"60\")==0)\n                frame_size = 3*sampling_rate/50;\n            else if (strcmp(argv[ args + 1 ], \"80\")==0)\n                frame_size = 4*sampling_rate/50;\n            else if (strcmp(argv[ args + 1 ], \"100\")==0)\n                frame_size = 5*sampling_rate/50;\n            else if (strcmp(argv[ args + 1 ], \"120\")==0)\n                frame_size = 6*sampling_rate/50;\n            else {\n                fprintf(stderr, \"Unsupported frame size: %s ms. 
\"\n                                \"Supported are 2.5, 5, 10, 20, 40, 60, 80, 100, 120.\\n\",\n                                argv[ args + 1 ]);\n                return EXIT_FAILURE;\n            }\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-max_payload\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-max_payload\");\n            max_payload_bytes = atoi( argv[ args + 1 ] );\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-complexity\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-complexity\");\n            complexity = atoi( argv[ args + 1 ] );\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-inbandfec\" ) == 0 ) {\n            use_inbandfec = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-forcemono\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-forcemono\");\n            forcechannels = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-cvbr\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-cvbr\");\n            cvbr = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-delayed-decision\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-delayed-decision\");\n            delayed_decision = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-dtx\") == 0 ) {\n            check_encoder_option(decode_only, \"-dtx\");\n            use_dtx = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-loss\" ) == 0 ) {\n            packet_loss_perc = atoi( argv[ args + 1 ] );\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-sweep\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-sweep\");\n            sweep_bps = atoi( argv[ args + 1 ] );\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-random_framesize\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-random_framesize\");\n            
random_framesize = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-sweep_max\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-sweep_max\");\n            sweep_max = atoi( argv[ args + 1 ] );\n            args += 2;\n        } else if( strcmp( argv[ args ], \"-random_fec\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-random_fec\");\n            random_fec = 1;\n            args++;\n        } else if( strcmp( argv[ args ], \"-silk8k_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-silk8k_test\");\n            mode_list = silk8_test;\n            nb_modes_in_list = 8;\n            args++;\n        } else if( strcmp( argv[ args ], \"-silk12k_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-silk12k_test\");\n            mode_list = silk12_test;\n            nb_modes_in_list = 8;\n            args++;\n        } else if( strcmp( argv[ args ], \"-silk16k_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-silk16k_test\");\n            mode_list = silk16_test;\n            nb_modes_in_list = 8;\n            args++;\n        } else if( strcmp( argv[ args ], \"-hybrid24k_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-hybrid24k_test\");\n            mode_list = hybrid24_test;\n            nb_modes_in_list = 4;\n            args++;\n        } else if( strcmp( argv[ args ], \"-hybrid48k_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-hybrid48k_test\");\n            mode_list = hybrid48_test;\n            nb_modes_in_list = 4;\n            args++;\n        } else if( strcmp( argv[ args ], \"-celt_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-celt_test\");\n            mode_list = celt_test;\n            nb_modes_in_list = 32;\n            args++;\n        } else if( strcmp( argv[ args ], \"-celt_hq_test\" ) == 0 ) {\n            check_encoder_option(decode_only, \"-celt_hq_test\");\n            mode_list = 
celt_hq_test;\n            nb_modes_in_list = 4;\n            args++;\n        } else {\n            printf( \"Error: unrecognized setting: %s\\n\\n\", argv[ args ] );\n            print_usage( argv );\n            return EXIT_FAILURE;\n        }\n    }\n\n    if (sweep_max)\n       sweep_min = bitrate_bps;\n\n    if (max_payload_bytes < 0 || max_payload_bytes > MAX_PACKET)\n    {\n        fprintf (stderr, \"max_payload_bytes must be between 0 and %d\\n\",\n                          MAX_PACKET);\n        return EXIT_FAILURE;\n    }\n\n    inFile = argv[argc-2];\n    fin = fopen(inFile, \"rb\");\n    if (!fin)\n    {\n        fprintf (stderr, \"Could not open input file %s\\n\", argv[argc-2]);\n        return EXIT_FAILURE;\n    }\n    if (mode_list)\n    {\n       int size;\n       fseek(fin, 0, SEEK_END);\n       size = ftell(fin);\n       fprintf(stderr, \"File size is %d bytes\\n\", size);\n       fseek(fin, 0, SEEK_SET);\n       mode_switch_time = size/sizeof(short)/channels/nb_modes_in_list;\n       fprintf(stderr, \"Switching mode every %d samples\\n\", mode_switch_time);\n    }\n\n    outFile = argv[argc-1];\n    fout = fopen(outFile, \"wb+\");\n    if (!fout)\n    {\n        fprintf (stderr, \"Could not open output file %s\\n\", argv[argc-1]);\n        fclose(fin);\n        return EXIT_FAILURE;\n    }\n\n    if (!decode_only)\n    {\n       enc = opus_encoder_create(sampling_rate, channels, application, &err);\n       if (err != OPUS_OK)\n       {\n          fprintf(stderr, \"Cannot create encoder: %s\\n\", opus_strerror(err));\n          fclose(fin);\n          fclose(fout);\n          return EXIT_FAILURE;\n       }\n       opus_encoder_ctl(enc, OPUS_SET_BITRATE(bitrate_bps));\n       opus_encoder_ctl(enc, OPUS_SET_BANDWIDTH(bandwidth));\n       opus_encoder_ctl(enc, OPUS_SET_VBR(use_vbr));\n       opus_encoder_ctl(enc, OPUS_SET_VBR_CONSTRAINT(cvbr));\n       opus_encoder_ctl(enc, OPUS_SET_COMPLEXITY(complexity));\n       opus_encoder_ctl(enc, 
OPUS_SET_INBAND_FEC(use_inbandfec));\n       opus_encoder_ctl(enc, OPUS_SET_FORCE_CHANNELS(forcechannels));\n       opus_encoder_ctl(enc, OPUS_SET_DTX(use_dtx));\n       opus_encoder_ctl(enc, OPUS_SET_PACKET_LOSS_PERC(packet_loss_perc));\n\n       opus_encoder_ctl(enc, OPUS_GET_LOOKAHEAD(&skip));\n       opus_encoder_ctl(enc, OPUS_SET_LSB_DEPTH(16));\n       opus_encoder_ctl(enc, OPUS_SET_EXPERT_FRAME_DURATION(variable_duration));\n    }\n    if (!encode_only)\n    {\n       dec = opus_decoder_create(sampling_rate, channels, &err);\n       if (err != OPUS_OK)\n       {\n          fprintf(stderr, \"Cannot create decoder: %s\\n\", opus_strerror(err));\n          fclose(fin);\n          fclose(fout);\n          return EXIT_FAILURE;\n       }\n    }\n\n\n    switch(bandwidth)\n    {\n    case OPUS_BANDWIDTH_NARROWBAND:\n         bandwidth_string = \"narrowband\";\n         break;\n    case OPUS_BANDWIDTH_MEDIUMBAND:\n         bandwidth_string = \"mediumband\";\n         break;\n    case OPUS_BANDWIDTH_WIDEBAND:\n         bandwidth_string = \"wideband\";\n         break;\n    case OPUS_BANDWIDTH_SUPERWIDEBAND:\n         bandwidth_string = \"superwideband\";\n         break;\n    case OPUS_BANDWIDTH_FULLBAND:\n         bandwidth_string = \"fullband\";\n         break;\n    case OPUS_AUTO:\n         bandwidth_string = \"auto bandwidth\";\n         break;\n    default:\n         bandwidth_string = \"unknown\";\n         break;\n    }\n\n    if (decode_only)\n       fprintf(stderr, \"Decoding with %ld Hz output (%d channels)\\n\",\n                       (long)sampling_rate, channels);\n    else\n       fprintf(stderr, \"Encoding %ld Hz input at %.3f kb/s \"\n                       \"in %s with %d-sample frames.\\n\",\n                       (long)sampling_rate, bitrate_bps*0.001,\n                       bandwidth_string, frame_size);\n\n    in = (short*)malloc(max_frame_size*channels*sizeof(short));\n    out = (short*)malloc(max_frame_size*channels*sizeof(short));\n    /* 
We need to allocate for 16-bit PCM data, but we store it as unsigned char. */\n    fbytes = (unsigned char*)malloc(max_frame_size*channels*sizeof(short));\n    data[0] = (unsigned char*)calloc(max_payload_bytes,sizeof(unsigned char));\n    if ( use_inbandfec ) {\n        data[1] = (unsigned char*)calloc(max_payload_bytes,sizeof(unsigned char));\n    }\n    if(delayed_decision)\n    {\n       if (frame_size==sampling_rate/400)\n          variable_duration = OPUS_FRAMESIZE_2_5_MS;\n       else if (frame_size==sampling_rate/200)\n          variable_duration = OPUS_FRAMESIZE_5_MS;\n       else if (frame_size==sampling_rate/100)\n          variable_duration = OPUS_FRAMESIZE_10_MS;\n       else if (frame_size==sampling_rate/50)\n          variable_duration = OPUS_FRAMESIZE_20_MS;\n       else if (frame_size==sampling_rate/25)\n          variable_duration = OPUS_FRAMESIZE_40_MS;\n       else if (frame_size==3*sampling_rate/50)\n          variable_duration = OPUS_FRAMESIZE_60_MS;\n       else if (frame_size==4*sampling_rate/50)\n          variable_duration = OPUS_FRAMESIZE_80_MS;\n       else if (frame_size==5*sampling_rate/50)\n          variable_duration = OPUS_FRAMESIZE_100_MS;\n       else\n          variable_duration = OPUS_FRAMESIZE_120_MS;\n       opus_encoder_ctl(enc, OPUS_SET_EXPERT_FRAME_DURATION(variable_duration));\n       frame_size = 2*48000;\n    }\n    while (!stop)\n    {\n        if (delayed_celt)\n        {\n            frame_size = newsize;\n            delayed_celt = 0;\n        } else if (random_framesize && rand()%20==0)\n        {\n            newsize = rand()%6;\n            switch(newsize)\n            {\n            case 0: newsize=sampling_rate/400; break;\n            case 1: newsize=sampling_rate/200; break;\n            case 2: newsize=sampling_rate/100; break;\n            case 3: newsize=sampling_rate/50; break;\n            case 4: newsize=sampling_rate/25; break;\n            case 5: newsize=3*sampling_rate/50; break;\n            }\n     
       while (newsize < sampling_rate/25 && bitrate_bps-abs(sweep_bps) <= 3*12*sampling_rate/newsize)\n               newsize*=2;\n            if (newsize < sampling_rate/100 && frame_size >= sampling_rate/100)\n            {\n                opus_encoder_ctl(enc, OPUS_SET_FORCE_MODE(MODE_CELT_ONLY));\n                delayed_celt=1;\n            } else {\n                frame_size = newsize;\n            }\n        }\n        if (random_fec && rand()%30==0)\n        {\n           opus_encoder_ctl(enc, OPUS_SET_INBAND_FEC(rand()%4==0));\n        }\n        if (decode_only)\n        {\n            unsigned char ch[4];\n            num_read = fread(ch, 1, 4, fin);\n            if (num_read!=4)\n                break;\n            len[toggle] = char_to_int(ch);\n            if (len[toggle]>max_payload_bytes || len[toggle]<0)\n            {\n                fprintf(stderr, \"Invalid payload length: %d\\n\",len[toggle]);\n                break;\n            }\n            num_read = fread(ch, 1, 4, fin);\n            if (num_read!=4)\n                break;\n            enc_final_range[toggle] = char_to_int(ch);\n            num_read = fread(data[toggle], 1, len[toggle], fin);\n            if (num_read!=(size_t)len[toggle])\n            {\n                fprintf(stderr, \"Ran out of input, \"\n                                \"expecting %d bytes got %d\\n\",\n                                len[toggle],(int)num_read);\n                break;\n            }\n        } else {\n            int i;\n            if (mode_list!=NULL)\n            {\n                opus_encoder_ctl(enc, OPUS_SET_BANDWIDTH(mode_list[curr_mode][1]));\n                opus_encoder_ctl(enc, OPUS_SET_FORCE_MODE(mode_list[curr_mode][0]));\n                opus_encoder_ctl(enc, OPUS_SET_FORCE_CHANNELS(mode_list[curr_mode][3]));\n                frame_size = mode_list[curr_mode][2];\n            }\n            num_read = fread(fbytes, sizeof(short)*channels, frame_size-remaining, fin);\n            
curr_read = (int)num_read;\n            tot_in += curr_read;\n            for(i=0;i<curr_read*channels;i++)\n            {\n                opus_int32 s;\n                s=fbytes[2*i+1]<<8|fbytes[2*i];\n                s=((s&0xFFFF)^0x8000)-0x8000;\n                in[i+remaining*channels]=s;\n            }\n            if (curr_read+remaining < frame_size)\n            {\n                for (i=(curr_read+remaining)*channels;i<frame_size*channels;i++)\n                   in[i] = 0;\n                if (encode_only || decode_only)\n                   stop = 1;\n            }\n            len[toggle] = opus_encode(enc, in, frame_size, data[toggle], max_payload_bytes);\n            nb_encoded = opus_packet_get_samples_per_frame(data[toggle], sampling_rate)*opus_packet_get_nb_frames(data[toggle], len[toggle]);\n            remaining = frame_size-nb_encoded;\n            for(i=0;i<remaining*channels;i++)\n               in[i] = in[nb_encoded*channels+i];\n            if (sweep_bps!=0)\n            {\n               bitrate_bps += sweep_bps;\n               if (sweep_max)\n               {\n                  if (bitrate_bps > sweep_max)\n                     sweep_bps = -sweep_bps;\n                  else if (bitrate_bps < sweep_min)\n                     sweep_bps = -sweep_bps;\n               }\n               /* safety */\n               if (bitrate_bps<1000)\n                  bitrate_bps = 1000;\n               opus_encoder_ctl(enc, OPUS_SET_BITRATE(bitrate_bps));\n            }\n            opus_encoder_ctl(enc, OPUS_GET_FINAL_RANGE(&enc_final_range[toggle]));\n            if (len[toggle] < 0)\n            {\n                fprintf (stderr, \"opus_encode() returned %d\\n\", len[toggle]);\n                fclose(fin);\n                fclose(fout);\n                return EXIT_FAILURE;\n            }\n            curr_mode_count += frame_size;\n            if (curr_mode_count > mode_switch_time && curr_mode < nb_modes_in_list-1)\n            {\n               
curr_mode++;\n               curr_mode_count = 0;\n            }\n        }\n\n#if 0 /* This is for testing the padding code, do not enable by default */\n        if (len[toggle]<1275)\n        {\n           int new_len = len[toggle]+rand()%(max_payload_bytes-len[toggle]);\n           if ((err = opus_packet_pad(data[toggle], len[toggle], new_len)) != OPUS_OK)\n           {\n              fprintf(stderr, \"padding failed: %s\\n\", opus_strerror(err));\n              return EXIT_FAILURE;\n           }\n           len[toggle] = new_len;\n        }\n#endif\n        if (encode_only)\n        {\n            unsigned char int_field[4];\n            int_to_char(len[toggle], int_field);\n            if (fwrite(int_field, 1, 4, fout) != 4) {\n               fprintf(stderr, \"Error writing.\\n\");\n               return EXIT_FAILURE;\n            }\n            int_to_char(enc_final_range[toggle], int_field);\n            if (fwrite(int_field, 1, 4, fout) != 4) {\n               fprintf(stderr, \"Error writing.\\n\");\n               return EXIT_FAILURE;\n            }\n            if (fwrite(data[toggle], 1, len[toggle], fout) != (unsigned)len[toggle]) {\n               fprintf(stderr, \"Error writing.\\n\");\n               return EXIT_FAILURE;\n            }\n            tot_samples += nb_encoded;\n        } else {\n            opus_int32 output_samples;\n            lost = len[toggle]==0 || (packet_loss_perc>0 && rand()%100 < packet_loss_perc);\n            if (lost)\n               opus_decoder_ctl(dec, OPUS_GET_LAST_PACKET_DURATION(&output_samples));\n            else\n               output_samples = max_frame_size;\n            if( count >= use_inbandfec ) {\n                /* delay by one packet when using in-band FEC */\n                if( use_inbandfec  ) {\n                    if( lost_prev ) {\n                        /* attempt to decode with in-band FEC from next packet */\n                        opus_decoder_ctl(dec, 
OPUS_GET_LAST_PACKET_DURATION(&output_samples));\n                        output_samples = opus_decode(dec, lost ? NULL : data[toggle], len[toggle], out, output_samples, 1);\n                    } else {\n                        /* regular decode */\n                        output_samples = max_frame_size;\n                        output_samples = opus_decode(dec, data[1-toggle], len[1-toggle], out, output_samples, 0);\n                    }\n                } else {\n                    output_samples = opus_decode(dec, lost ? NULL : data[toggle], len[toggle], out, output_samples, 0);\n                }\n                if (output_samples>0)\n                {\n                    if (!decode_only && tot_out + output_samples > tot_in)\n                    {\n                       stop=1;\n                       output_samples = (opus_int32)(tot_in - tot_out);\n                    }\n                    if (output_samples>skip) {\n                       int i;\n                       for(i=0;i<(output_samples-skip)*channels;i++)\n                       {\n                          short s;\n                          s=out[i+(skip*channels)];\n                          fbytes[2*i]=s&0xFF;\n                          fbytes[2*i+1]=(s>>8)&0xFF;\n                       }\n                       if (fwrite(fbytes, sizeof(short)*channels, output_samples-skip, fout) != (unsigned)(output_samples-skip)){\n                          fprintf(stderr, \"Error writing.\\n\");\n                          return EXIT_FAILURE;\n                       }\n                       tot_out += output_samples-skip;\n                    }\n                    if (output_samples<skip) skip -= output_samples;\n                    else skip = 0;\n                } else {\n                   fprintf(stderr, \"error decoding frame: %s\\n\",\n                                   opus_strerror(output_samples));\n                }\n                tot_samples += output_samples;\n            }\n        
}\n\n        if (!encode_only)\n           opus_decoder_ctl(dec, OPUS_GET_FINAL_RANGE(&dec_final_range));\n        /* compare final range encoder rng values of encoder and decoder */\n        if( enc_final_range[toggle^use_inbandfec]!=0  && !encode_only\n         && !lost && !lost_prev\n         && dec_final_range != enc_final_range[toggle^use_inbandfec] ) {\n            fprintf (stderr, \"Error: Range coder state mismatch \"\n                             \"between encoder and decoder \"\n                             \"in frame %ld: 0x%8lx vs 0x%8lx\\n\",\n                         (long)count,\n                         (unsigned long)enc_final_range[toggle^use_inbandfec],\n                         (unsigned long)dec_final_range);\n            fclose(fin);\n            fclose(fout);\n            return EXIT_FAILURE;\n        }\n\n        lost_prev = lost;\n        if( count >= use_inbandfec ) {\n            /* count bits */\n            bits += len[toggle]*8;\n            bits_max = ( len[toggle]*8 > bits_max ) ? 
len[toggle]*8 : bits_max;\n            bits2 += len[toggle]*len[toggle]*64;\n            if (!decode_only)\n            {\n                nrg = 0.0;\n                for ( k = 0; k < frame_size * channels; k++ ) {\n                    nrg += in[ k ] * (double)in[ k ];\n                }\n                nrg /= frame_size * channels;\n                if( nrg > 1e5 ) {\n                    bits_act += len[toggle]*8;\n                    count_act++;\n                }\n            }\n        }\n        count++;\n        toggle = (toggle + use_inbandfec) & 1;\n    }\n\n    /* Print out bitrate statistics */\n    if(decode_only)\n        frame_size = (int)(tot_samples / count);\n    count -= use_inbandfec;\n    fprintf (stderr, \"average bitrate:             %7.3f kb/s\\n\",\n                     1e-3*bits*sampling_rate/tot_samples);\n    fprintf (stderr, \"maximum bitrate:             %7.3f kb/s\\n\",\n                     1e-3*bits_max*sampling_rate/frame_size);\n    if (!decode_only)\n       fprintf (stderr, \"active bitrate:              %7.3f kb/s\\n\",\n               1e-3*bits_act*sampling_rate/(1e-15+frame_size*(double)count_act));\n    fprintf (stderr, \"bitrate standard deviation:  %7.3f kb/s\\n\",\n            1e-3*sqrt(bits2/count - bits*bits/(count*(double)count))*sampling_rate/frame_size);\n    silk_TimerSave(\"opus_timing.txt\");\n    opus_encoder_destroy(enc);\n    opus_decoder_destroy(dec);\n    free(data[0]);\n    if (use_inbandfec)\n        free(data[1]);\n    fclose(fin);\n    fclose(fout);\n    free(in);\n    free(out);\n    free(fbytes);\n    return EXIT_SUCCESS;\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_encoder.c",
    "content": "/* Copyright (c) 2010-2011 Xiph.Org Foundation, Skype Limited\n   Written by Jean-Marc Valin and Koen Vos */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include <stdarg.h>\n#include \"celt.h\"\n#include \"entenc.h\"\n#include \"modes.h\"\n#include \"API.h\"\n#include \"stack_alloc.h\"\n#include \"float_cast.h\"\n#include \"opus.h\"\n#include \"arch.h\"\n#include \"pitch.h\"\n#include \"opus_private.h\"\n#include \"os_support.h\"\n#include \"cpu_support.h\"\n#include \"analysis.h\"\n#include \"mathops.h\"\n#include \"tuning_parameters.h\"\n#ifdef FIXED_POINT\n#include \"fixed/structs_FIX.h\"\n#else\n#include 
\"float/structs_FLP.h\"\n#endif\n\n#define MAX_ENCODER_BUFFER 480\n\n#ifndef DISABLE_FLOAT_API\n#define PSEUDO_SNR_THRESHOLD 316.23f    /* 10^(25/10) */\n#endif\n\ntypedef struct {\n   opus_val32 XX, XY, YY;\n   opus_val16 smoothed_width;\n   opus_val16 max_follower;\n} StereoWidthState;\n\nstruct OpusEncoder {\n    int          celt_enc_offset;\n    int          silk_enc_offset;\n    silk_EncControlStruct silk_mode;\n    int          application;\n    int          channels;\n    int          delay_compensation;\n    int          force_channels;\n    int          signal_type;\n    int          user_bandwidth;\n    int          max_bandwidth;\n    int          user_forced_mode;\n    int          voice_ratio;\n    opus_int32   Fs;\n    int          use_vbr;\n    int          vbr_constraint;\n    int          variable_duration;\n    opus_int32   bitrate_bps;\n    opus_int32   user_bitrate_bps;\n    int          lsb_depth;\n    int          encoder_buffer;\n    int          lfe;\n    int          arch;\n    int          use_dtx;                 /* general DTX for both SILK and CELT */\n#ifndef DISABLE_FLOAT_API\n    TonalityAnalysisState analysis;\n#endif\n\n#define OPUS_ENCODER_RESET_START stream_channels\n    int          stream_channels;\n    opus_int16   hybrid_stereo_width_Q14;\n    opus_int32   variable_HP_smth2_Q15;\n    opus_val16   prev_HB_gain;\n    opus_val32   hp_mem[4];\n    int          mode;\n    int          prev_mode;\n    int          prev_channels;\n    int          prev_framesize;\n    int          bandwidth;\n    /* Bandwidth determined automatically from the rate (before any other adjustment) */\n    int          auto_bandwidth;\n    int          silk_bw_switch;\n    /* Sampling rate (at the API level) */\n    int          first;\n    opus_val16 * energy_masking;\n    StereoWidthState width_mem;\n    opus_val16   delay_buffer[MAX_ENCODER_BUFFER*2];\n#ifndef DISABLE_FLOAT_API\n    int          detected_bandwidth;\n    int          
nb_no_activity_frames;\n    opus_val32   peak_signal_energy;\n#endif\n    int          nonfinal_frame; /* current frame is not the final in a packet */\n    opus_uint32  rangeFinal;\n};\n\n/* Transition tables for the voice and music. First column is the\n   middle (memoryless) threshold. The second column is the hysteresis\n   (difference with the middle) */\nstatic const opus_int32 mono_voice_bandwidth_thresholds[8] = {\n        10000, 1000, /* NB<->MB */\n        11000, 1000, /* MB<->WB */\n        13500, 1000, /* WB<->SWB */\n        14000, 2000, /* SWB<->FB */\n};\nstatic const opus_int32 mono_music_bandwidth_thresholds[8] = {\n        10000, 1000, /* NB<->MB */\n        11000, 1000, /* MB<->WB */\n        13500, 1000, /* WB<->SWB */\n        14000, 2000, /* SWB<->FB */\n};\nstatic const opus_int32 stereo_voice_bandwidth_thresholds[8] = {\n        10000, 1000, /* NB<->MB */\n        11000, 1000, /* MB<->WB */\n        13500, 1000, /* WB<->SWB */\n        14000, 2000, /* SWB<->FB */\n};\nstatic const opus_int32 stereo_music_bandwidth_thresholds[8] = {\n        10000, 1000, /* NB<->MB */\n        11000, 1000, /* MB<->WB */\n        13500, 1000, /* WB<->SWB */\n        14000, 2000, /* SWB<->FB */\n};\n/* Threshold bit-rates for switching between mono and stereo */\nstatic const opus_int32 stereo_voice_threshold = 24000;\nstatic const opus_int32 stereo_music_threshold = 24000;\n\n/* Threshold bit-rate for switching between SILK/hybrid and CELT-only */\nstatic const opus_int32 mode_thresholds[2][2] = {\n      /* voice */ /* music */\n      {  64000,      16000}, /* mono */\n      {  36000,      16000}, /* stereo */\n};\n\nstatic const opus_int32 fec_thresholds[] = {\n        12000, 1000, /* NB */\n        14000, 1000, /* MB */\n        16000, 1000, /* WB */\n        20000, 1000, /* SWB */\n        22000, 1000, /* FB */\n};\n\nint opus_encoder_get_size(int channels)\n{\n    int silkEncSizeBytes, celtEncSizeBytes;\n    int ret;\n    if (channels<1 || channels > 2)\n  
      return 0;\n    ret = silk_Get_Encoder_Size( &silkEncSizeBytes );\n    if (ret)\n        return 0;\n    silkEncSizeBytes = align(silkEncSizeBytes);\n    celtEncSizeBytes = celt_encoder_get_size(channels);\n    return align(sizeof(OpusEncoder))+silkEncSizeBytes+celtEncSizeBytes;\n}\n\nint opus_encoder_init(OpusEncoder* st, opus_int32 Fs, int channels, int application)\n{\n    void *silk_enc;\n    CELTEncoder *celt_enc;\n    int err;\n    int ret, silkEncSizeBytes;\n\n   if((Fs!=48000&&Fs!=24000&&Fs!=16000&&Fs!=12000&&Fs!=8000)||(channels!=1&&channels!=2)||\n        (application != OPUS_APPLICATION_VOIP && application != OPUS_APPLICATION_AUDIO\n        && application != OPUS_APPLICATION_RESTRICTED_LOWDELAY))\n        return OPUS_BAD_ARG;\n\n    OPUS_CLEAR((char*)st, opus_encoder_get_size(channels));\n    /* Create SILK encoder */\n    ret = silk_Get_Encoder_Size( &silkEncSizeBytes );\n    if (ret)\n        return OPUS_BAD_ARG;\n    silkEncSizeBytes = align(silkEncSizeBytes);\n    st->silk_enc_offset = align(sizeof(OpusEncoder));\n    st->celt_enc_offset = st->silk_enc_offset+silkEncSizeBytes;\n    silk_enc = (char*)st+st->silk_enc_offset;\n    celt_enc = (CELTEncoder*)((char*)st+st->celt_enc_offset);\n\n    st->stream_channels = st->channels = channels;\n\n    st->Fs = Fs;\n\n    st->arch = opus_select_arch();\n\n    ret = silk_InitEncoder( silk_enc, st->arch, &st->silk_mode );\n    if(ret)return OPUS_INTERNAL_ERROR;\n\n    /* default SILK parameters */\n    st->silk_mode.nChannelsAPI              = channels;\n    st->silk_mode.nChannelsInternal         = channels;\n    st->silk_mode.API_sampleRate            = st->Fs;\n    st->silk_mode.maxInternalSampleRate     = 16000;\n    st->silk_mode.minInternalSampleRate     = 8000;\n    st->silk_mode.desiredInternalSampleRate = 16000;\n    st->silk_mode.payloadSize_ms            = 20;\n    st->silk_mode.bitRate                   = 25000;\n    st->silk_mode.packetLossPercentage      = 0;\n    st->silk_mode.complexity     
           = 9;\n    st->silk_mode.useInBandFEC              = 0;\n    st->silk_mode.useDTX                    = 0;\n    st->silk_mode.useCBR                    = 0;\n    st->silk_mode.reducedDependency         = 0;\n\n    /* Create CELT encoder */\n    /* Initialize CELT encoder */\n    err = celt_encoder_init(celt_enc, Fs, channels, st->arch);\n    if(err!=OPUS_OK)return OPUS_INTERNAL_ERROR;\n\n    celt_encoder_ctl(celt_enc, CELT_SET_SIGNALLING(0));\n    celt_encoder_ctl(celt_enc, OPUS_SET_COMPLEXITY(st->silk_mode.complexity));\n\n    st->use_vbr = 1;\n    /* Makes constrained VBR the default (safer for real-time use) */\n    st->vbr_constraint = 1;\n    st->user_bitrate_bps = OPUS_AUTO;\n    st->bitrate_bps = 3000+Fs*channels;\n    st->application = application;\n    st->signal_type = OPUS_AUTO;\n    st->user_bandwidth = OPUS_AUTO;\n    st->max_bandwidth = OPUS_BANDWIDTH_FULLBAND;\n    st->force_channels = OPUS_AUTO;\n    st->user_forced_mode = OPUS_AUTO;\n    st->voice_ratio = -1;\n    st->encoder_buffer = st->Fs/100;\n    st->lsb_depth = 24;\n    st->variable_duration = OPUS_FRAMESIZE_ARG;\n\n    /* Delay compensation of 4 ms (2.5 ms for SILK's extra look-ahead\n       + 1.5 ms for SILK resamplers and stereo prediction) */\n    st->delay_compensation = st->Fs/250;\n\n    st->hybrid_stereo_width_Q14 = 1 << 14;\n    st->prev_HB_gain = Q15ONE;\n    st->variable_HP_smth2_Q15 = silk_LSHIFT( silk_lin2log( VARIABLE_HP_MIN_CUTOFF_HZ ), 8 );\n    st->first = 1;\n    st->mode = MODE_HYBRID;\n    st->bandwidth = OPUS_BANDWIDTH_FULLBAND;\n\n#ifndef DISABLE_FLOAT_API\n    tonality_analysis_init(&st->analysis, st->Fs);\n    st->analysis.application = st->application;\n#endif\n\n    return OPUS_OK;\n}\n\nstatic unsigned char gen_toc(int mode, int framerate, int bandwidth, int channels)\n{\n   int period;\n   unsigned char toc;\n   period = 0;\n   while (framerate < 400)\n   {\n       framerate <<= 1;\n       period++;\n   }\n   if (mode == MODE_SILK_ONLY)\n   {\n       toc = 
(bandwidth-OPUS_BANDWIDTH_NARROWBAND)<<5;\n       toc |= (period-2)<<3;\n   } else if (mode == MODE_CELT_ONLY)\n   {\n       int tmp = bandwidth-OPUS_BANDWIDTH_MEDIUMBAND;\n       if (tmp < 0)\n           tmp = 0;\n       toc = 0x80;\n       toc |= tmp << 5;\n       toc |= period<<3;\n   } else /* Hybrid */\n   {\n       toc = 0x60;\n       toc |= (bandwidth-OPUS_BANDWIDTH_SUPERWIDEBAND)<<4;\n       toc |= (period-2)<<3;\n   }\n   toc |= (channels==2)<<2;\n   return toc;\n}\n\n#ifndef FIXED_POINT\nstatic void silk_biquad_float(\n    const opus_val16      *in,            /* I:    Input signal                   */\n    const opus_int32      *B_Q28,         /* I:    MA coefficients [3]            */\n    const opus_int32      *A_Q28,         /* I:    AR coefficients [2]            */\n    opus_val32            *S,             /* I/O:  State vector [2]               */\n    opus_val16            *out,           /* O:    Output signal                  */\n    const opus_int32      len,            /* I:    Signal length (must be even)   */\n    int stride\n)\n{\n    /* DIRECT FORM II TRANSPOSED (uses 2 element state vector) */\n    opus_int   k;\n    opus_val32 vout;\n    opus_val32 inval;\n    opus_val32 A[2], B[3];\n\n    A[0] = (opus_val32)(A_Q28[0] * (1.f/((opus_int32)1<<28)));\n    A[1] = (opus_val32)(A_Q28[1] * (1.f/((opus_int32)1<<28)));\n    B[0] = (opus_val32)(B_Q28[0] * (1.f/((opus_int32)1<<28)));\n    B[1] = (opus_val32)(B_Q28[1] * (1.f/((opus_int32)1<<28)));\n    B[2] = (opus_val32)(B_Q28[2] * (1.f/((opus_int32)1<<28)));\n\n    /* Negate A_Q28 values and split in two parts */\n\n    for( k = 0; k < len; k++ ) {\n        /* S[ 0 ], S[ 1 ]: Q12 */\n        inval = in[ k*stride ];\n        vout = S[ 0 ] + B[0]*inval;\n\n        S[ 0 ] = S[1] - vout*A[0] + B[1]*inval;\n\n        S[ 1 ] = - vout*A[1] + B[2]*inval + VERY_SMALL;\n\n        /* Scale back to Q0 and saturate */\n        out[ k*stride ] = vout;\n    }\n}\n#endif\n\nstatic void hp_cutoff(const opus_val16 
*in, opus_int32 cutoff_Hz, opus_val16 *out, opus_val32 *hp_mem, int len, int channels, opus_int32 Fs, int arch)\n{\n   opus_int32 B_Q28[ 3 ], A_Q28[ 2 ];\n   opus_int32 Fc_Q19, r_Q28, r_Q22;\n   (void)arch;\n\n   silk_assert( cutoff_Hz <= silk_int32_MAX / SILK_FIX_CONST( 1.5 * 3.14159 / 1000, 19 ) );\n   Fc_Q19 = silk_DIV32_16( silk_SMULBB( SILK_FIX_CONST( 1.5 * 3.14159 / 1000, 19 ), cutoff_Hz ), Fs/1000 );\n   silk_assert( Fc_Q19 > 0 && Fc_Q19 < 32768 );\n\n   r_Q28 = SILK_FIX_CONST( 1.0, 28 ) - silk_MUL( SILK_FIX_CONST( 0.92, 9 ), Fc_Q19 );\n\n   /* b = r * [ 1; -2; 1 ]; */\n   /* a = [ 1; -2 * r * ( 1 - 0.5 * Fc^2 ); r^2 ]; */\n   B_Q28[ 0 ] = r_Q28;\n   B_Q28[ 1 ] = silk_LSHIFT( -r_Q28, 1 );\n   B_Q28[ 2 ] = r_Q28;\n\n   /* -r * ( 2 - Fc * Fc ); */\n   r_Q22  = silk_RSHIFT( r_Q28, 6 );\n   A_Q28[ 0 ] = silk_SMULWW( r_Q22, silk_SMULWW( Fc_Q19, Fc_Q19 ) - SILK_FIX_CONST( 2.0,  22 ) );\n   A_Q28[ 1 ] = silk_SMULWW( r_Q22, r_Q22 );\n\n#ifdef FIXED_POINT\n   if( channels == 1 ) {\n      silk_biquad_alt_stride1( in, B_Q28, A_Q28, hp_mem, out, len );\n   } else {\n      silk_biquad_alt_stride2( in, B_Q28, A_Q28, hp_mem, out, len, arch );\n   }\n#else\n   silk_biquad_float( in, B_Q28, A_Q28, hp_mem, out, len, channels );\n   if( channels == 2 ) {\n       silk_biquad_float( in+1, B_Q28, A_Q28, hp_mem+2, out+1, len, channels );\n   }\n#endif\n}\n\n#ifdef FIXED_POINT\nstatic void dc_reject(const opus_val16 *in, opus_int32 cutoff_Hz, opus_val16 *out, opus_val32 *hp_mem, int len, int channels, opus_int32 Fs)\n{\n   int c, i;\n   int shift;\n\n   /* Approximates -round(log2(4.*cutoff_Hz/Fs)) */\n   shift=celt_ilog2(Fs/(cutoff_Hz*3));\n   for (c=0;c<channels;c++)\n   {\n      for (i=0;i<len;i++)\n      {\n         opus_val32 x, tmp, y;\n         x = SHL32(EXTEND32(in[channels*i+c]), 14);\n         /* First stage */\n         tmp = x-hp_mem[2*c];\n         hp_mem[2*c] = hp_mem[2*c] + PSHR32(x - hp_mem[2*c], shift);\n         /* Second stage */\n         y = tmp - 
hp_mem[2*c+1];\n         hp_mem[2*c+1] = hp_mem[2*c+1] + PSHR32(tmp - hp_mem[2*c+1], shift);\n         out[channels*i+c] = EXTRACT16(SATURATE(PSHR32(y, 14), 32767));\n      }\n   }\n}\n\n#else\nstatic void dc_reject(const opus_val16 *in, opus_int32 cutoff_Hz, opus_val16 *out, opus_val32 *hp_mem, int len, int channels, opus_int32 Fs)\n{\n   int i;\n   float coef, coef2;\n   coef = 4.0f*cutoff_Hz/Fs;\n   coef2 = 1-coef;\n   if (channels==2)\n   {\n      float m0, m1, m2, m3;\n      m0 = hp_mem[0];\n      m1 = hp_mem[1];\n      m2 = hp_mem[2];\n      m3 = hp_mem[3];\n      for (i=0;i<len;i++)\n      {\n         opus_val32 x0, x1, tmp0, tmp1, out0, out1;\n         x0 = in[2*i+0];\n         x1 = in[2*i+1];\n         /* First stage */\n         tmp0 = x0-m0;\n         tmp1 = x1-m2;\n         m0 = coef*x0 + VERY_SMALL + coef2*m0;\n         m2 = coef*x1 + VERY_SMALL + coef2*m2;\n         /* Second stage */\n         out0 = tmp0 - m1;\n         out1 = tmp1 - m3;\n         m1 = coef*tmp0 + VERY_SMALL + coef2*m1;\n         m3 = coef*tmp1 + VERY_SMALL + coef2*m3;\n         out[2*i+0] = out0;\n         out[2*i+1] = out1;\n      }\n      hp_mem[0] = m0;\n      hp_mem[1] = m1;\n      hp_mem[2] = m2;\n      hp_mem[3] = m3;\n   } else {\n      float m0, m1;\n      m0 = hp_mem[0];\n      m1 = hp_mem[1];\n      for (i=0;i<len;i++)\n      {\n         opus_val32 x, tmp, y;\n         x = in[i];\n         /* First stage */\n         tmp = x-m0;\n         m0 = coef*x + VERY_SMALL + coef2*m0;\n         /* Second stage */\n         y = tmp - m1;\n         m1 = coef*tmp + VERY_SMALL + coef2*m1;\n         out[i] = y;\n      }\n      hp_mem[0] = m0;\n      hp_mem[1] = m1;\n   }\n}\n#endif\n\nstatic void stereo_fade(const opus_val16 *in, opus_val16 *out, opus_val16 g1, opus_val16 g2,\n        int overlap48, int frame_size, int channels, const opus_val16 *window, opus_int32 Fs)\n{\n    int i;\n    int overlap;\n    int inc;\n    inc = 48000/Fs;\n    overlap=overlap48/inc;\n    g1 = Q15ONE-g1;\n  
  g2 = Q15ONE-g2;\n    for (i=0;i<overlap;i++)\n    {\n       opus_val32 diff;\n       opus_val16 g, w;\n       w = MULT16_16_Q15(window[i*inc], window[i*inc]);\n       g = SHR32(MAC16_16(MULT16_16(w,g2),\n             Q15ONE-w, g1), 15);\n       diff = EXTRACT16(HALF32((opus_val32)in[i*channels] - (opus_val32)in[i*channels+1]));\n       diff = MULT16_16_Q15(g, diff);\n       out[i*channels] = out[i*channels] - diff;\n       out[i*channels+1] = out[i*channels+1] + diff;\n    }\n    for (;i<frame_size;i++)\n    {\n       opus_val32 diff;\n       diff = EXTRACT16(HALF32((opus_val32)in[i*channels] - (opus_val32)in[i*channels+1]));\n       diff = MULT16_16_Q15(g2, diff);\n       out[i*channels] = out[i*channels] - diff;\n       out[i*channels+1] = out[i*channels+1] + diff;\n    }\n}\n\nstatic void gain_fade(const opus_val16 *in, opus_val16 *out, opus_val16 g1, opus_val16 g2,\n        int overlap48, int frame_size, int channels, const opus_val16 *window, opus_int32 Fs)\n{\n    int i;\n    int inc;\n    int overlap;\n    int c;\n    inc = 48000/Fs;\n    overlap=overlap48/inc;\n    if (channels==1)\n    {\n       for (i=0;i<overlap;i++)\n       {\n          opus_val16 g, w;\n          w = MULT16_16_Q15(window[i*inc], window[i*inc]);\n          g = SHR32(MAC16_16(MULT16_16(w,g2),\n                Q15ONE-w, g1), 15);\n          out[i] = MULT16_16_Q15(g, in[i]);\n       }\n    } else {\n       for (i=0;i<overlap;i++)\n       {\n          opus_val16 g, w;\n          w = MULT16_16_Q15(window[i*inc], window[i*inc]);\n          g = SHR32(MAC16_16(MULT16_16(w,g2),\n                Q15ONE-w, g1), 15);\n          out[i*2] = MULT16_16_Q15(g, in[i*2]);\n          out[i*2+1] = MULT16_16_Q15(g, in[i*2+1]);\n       }\n    }\n    c=0;do {\n       for (i=overlap;i<frame_size;i++)\n       {\n          out[i*channels+c] = MULT16_16_Q15(g2, in[i*channels+c]);\n       }\n    }\n    while (++c<channels);\n}\n\nOpusEncoder *opus_encoder_create(opus_int32 Fs, int channels, int application, int 
*error)\n{\n   int ret;\n   OpusEncoder *st;\n   if((Fs!=48000&&Fs!=24000&&Fs!=16000&&Fs!=12000&&Fs!=8000)||(channels!=1&&channels!=2)||\n       (application != OPUS_APPLICATION_VOIP && application != OPUS_APPLICATION_AUDIO\n       && application != OPUS_APPLICATION_RESTRICTED_LOWDELAY))\n   {\n      if (error)\n         *error = OPUS_BAD_ARG;\n      return NULL;\n   }\n   st = (OpusEncoder *)opus_alloc(opus_encoder_get_size(channels));\n   if (st == NULL)\n   {\n      if (error)\n         *error = OPUS_ALLOC_FAIL;\n      return NULL;\n   }\n   ret = opus_encoder_init(st, Fs, channels, application);\n   if (error)\n      *error = ret;\n   if (ret != OPUS_OK)\n   {\n      opus_free(st);\n      st = NULL;\n   }\n   return st;\n}\n\nstatic opus_int32 user_bitrate_to_bitrate(OpusEncoder *st, int frame_size, int max_data_bytes)\n{\n  if(!frame_size)frame_size=st->Fs/400;\n  if (st->user_bitrate_bps==OPUS_AUTO)\n    return 60*st->Fs/frame_size + st->Fs*st->channels;\n  else if (st->user_bitrate_bps==OPUS_BITRATE_MAX)\n    return max_data_bytes*8*st->Fs/frame_size;\n  else\n    return st->user_bitrate_bps;\n}\n\n#ifndef DISABLE_FLOAT_API\n#ifdef FIXED_POINT\n#define PCM2VAL(x) FLOAT2INT16(x)\n#else\n#define PCM2VAL(x) SCALEIN(x)\n#endif\n\nvoid downmix_float(const void *_x, opus_val32 *y, int subframe, int offset, int c1, int c2, int C)\n{\n   const float *x;\n   int j;\n\n   x = (const float *)_x;\n   for (j=0;j<subframe;j++)\n      y[j] = PCM2VAL(x[(j+offset)*C+c1]);\n   if (c2>-1)\n   {\n      for (j=0;j<subframe;j++)\n         y[j] += PCM2VAL(x[(j+offset)*C+c2]);\n   } else if (c2==-2)\n   {\n      int c;\n      for (c=1;c<C;c++)\n      {\n         for (j=0;j<subframe;j++)\n            y[j] += PCM2VAL(x[(j+offset)*C+c]);\n      }\n   }\n}\n#endif\n\nvoid downmix_int(const void *_x, opus_val32 *y, int subframe, int offset, int c1, int c2, int C)\n{\n   const opus_int16 *x;\n   int j;\n\n   x = (const opus_int16 *)_x;\n   for (j=0;j<subframe;j++)\n      y[j] = 
x[(j+offset)*C+c1];\n   if (c2>-1)\n   {\n      for (j=0;j<subframe;j++)\n         y[j] += x[(j+offset)*C+c2];\n   } else if (c2==-2)\n   {\n      int c;\n      for (c=1;c<C;c++)\n      {\n         for (j=0;j<subframe;j++)\n            y[j] += x[(j+offset)*C+c];\n      }\n   }\n}\n\nopus_int32 frame_size_select(opus_int32 frame_size, int variable_duration, opus_int32 Fs)\n{\n   int new_size;\n   if (frame_size<Fs/400)\n      return -1;\n   if (variable_duration == OPUS_FRAMESIZE_ARG)\n      new_size = frame_size;\n   else if (variable_duration >= OPUS_FRAMESIZE_2_5_MS && variable_duration <= OPUS_FRAMESIZE_120_MS)\n   {\n      if (variable_duration <= OPUS_FRAMESIZE_40_MS)\n         new_size = (Fs/400)<<(variable_duration-OPUS_FRAMESIZE_2_5_MS);\n      else\n         new_size = (variable_duration-OPUS_FRAMESIZE_2_5_MS-2)*Fs/50;\n   }\n   else\n      return -1;\n   if (new_size>frame_size)\n      return -1;\n   if (400*new_size!=Fs   && 200*new_size!=Fs   && 100*new_size!=Fs   &&\n        50*new_size!=Fs   &&  25*new_size!=Fs   &&  50*new_size!=3*Fs &&\n        50*new_size!=4*Fs &&  50*new_size!=5*Fs &&  50*new_size!=6*Fs)\n      return -1;\n   return new_size;\n}\n\nopus_val16 compute_stereo_width(const opus_val16 *pcm, int frame_size, opus_int32 Fs, StereoWidthState *mem)\n{\n   opus_val32 xx, xy, yy;\n   opus_val16 sqrt_xx, sqrt_yy;\n   opus_val16 qrrt_xx, qrrt_yy;\n   int frame_rate;\n   int i;\n   opus_val16 short_alpha;\n\n   frame_rate = Fs/frame_size;\n   short_alpha = Q15ONE - MULT16_16(25, Q15ONE)/IMAX(50,frame_rate);\n   xx=xy=yy=0;\n   /* Unroll by 4. The frame size is always a multiple of 4 *except* for\n      2.5 ms frames at 12 kHz. Since this setting is very rare (and very\n      stupid), we just discard the last two samples. 
*/\n   for (i=0;i<frame_size-3;i+=4)\n   {\n      opus_val32 pxx=0;\n      opus_val32 pxy=0;\n      opus_val32 pyy=0;\n      opus_val16 x, y;\n      x = pcm[2*i];\n      y = pcm[2*i+1];\n      pxx = SHR32(MULT16_16(x,x),2);\n      pxy = SHR32(MULT16_16(x,y),2);\n      pyy = SHR32(MULT16_16(y,y),2);\n      x = pcm[2*i+2];\n      y = pcm[2*i+3];\n      pxx += SHR32(MULT16_16(x,x),2);\n      pxy += SHR32(MULT16_16(x,y),2);\n      pyy += SHR32(MULT16_16(y,y),2);\n      x = pcm[2*i+4];\n      y = pcm[2*i+5];\n      pxx += SHR32(MULT16_16(x,x),2);\n      pxy += SHR32(MULT16_16(x,y),2);\n      pyy += SHR32(MULT16_16(y,y),2);\n      x = pcm[2*i+6];\n      y = pcm[2*i+7];\n      pxx += SHR32(MULT16_16(x,x),2);\n      pxy += SHR32(MULT16_16(x,y),2);\n      pyy += SHR32(MULT16_16(y,y),2);\n\n      xx += SHR32(pxx, 10);\n      xy += SHR32(pxy, 10);\n      yy += SHR32(pyy, 10);\n   }\n   mem->XX += MULT16_32_Q15(short_alpha, xx-mem->XX);\n   mem->XY += MULT16_32_Q15(short_alpha, xy-mem->XY);\n   mem->YY += MULT16_32_Q15(short_alpha, yy-mem->YY);\n   mem->XX = MAX32(0, mem->XX);\n   mem->XY = MAX32(0, mem->XY);\n   mem->YY = MAX32(0, mem->YY);\n   if (MAX32(mem->XX, mem->YY)>QCONST16(8e-4f, 18))\n   {\n      opus_val16 corr;\n      opus_val16 ldiff;\n      opus_val16 width;\n      sqrt_xx = celt_sqrt(mem->XX);\n      sqrt_yy = celt_sqrt(mem->YY);\n      qrrt_xx = celt_sqrt(sqrt_xx);\n      qrrt_yy = celt_sqrt(sqrt_yy);\n      /* Inter-channel correlation */\n      mem->XY = MIN32(mem->XY, sqrt_xx*sqrt_yy);\n      corr = SHR32(frac_div32(mem->XY,EPSILON+MULT16_16(sqrt_xx,sqrt_yy)),16);\n      /* Approximate loudness difference */\n      ldiff = MULT16_16(Q15ONE, ABS16(qrrt_xx-qrrt_yy))/(EPSILON+qrrt_xx+qrrt_yy);\n      width = MULT16_16_Q15(celt_sqrt(QCONST32(1.f,30)-MULT16_16(corr,corr)), ldiff);\n      /* Smoothing over one second */\n      mem->smoothed_width += (width-mem->smoothed_width)/frame_rate;\n      /* Peak follower */\n      mem->max_follower = 
MAX16(mem->max_follower-QCONST16(.02f,15)/frame_rate, mem->smoothed_width);\n   }\n   /*printf(\"%f %f %f %f %f \", corr/(float)Q15ONE, ldiff/(float)Q15ONE, width/(float)Q15ONE, mem->smoothed_width/(float)Q15ONE, mem->max_follower/(float)Q15ONE);*/\n   return EXTRACT16(MIN32(Q15ONE, MULT16_16(20, mem->max_follower)));\n}\n\nstatic int decide_fec(int useInBandFEC, int PacketLoss_perc, int last_fec, int mode, int *bandwidth, opus_int32 rate)\n{\n   int orig_bandwidth;\n   if (!useInBandFEC || PacketLoss_perc == 0 || mode == MODE_CELT_ONLY)\n      return 0;\n   orig_bandwidth = *bandwidth;\n   for (;;)\n   {\n      opus_int32 hysteresis;\n      opus_int32 LBRR_rate_thres_bps;\n      /* Compute threshold for using FEC at the current bandwidth setting */\n      LBRR_rate_thres_bps = fec_thresholds[2*(*bandwidth - OPUS_BANDWIDTH_NARROWBAND)];\n      hysteresis = fec_thresholds[2*(*bandwidth - OPUS_BANDWIDTH_NARROWBAND) + 1];\n      if (last_fec == 1) LBRR_rate_thres_bps -= hysteresis;\n      if (last_fec == 0) LBRR_rate_thres_bps += hysteresis;\n      LBRR_rate_thres_bps = silk_SMULWB( silk_MUL( LBRR_rate_thres_bps,\n            125 - silk_min( PacketLoss_perc, 25 ) ), SILK_FIX_CONST( 0.01, 16 ) );\n      /* If loss <= 5%, we look at whether we have enough rate to enable FEC.\n         If loss > 5%, we decrease the bandwidth until we can enable FEC. */\n      if (rate > LBRR_rate_thres_bps)\n         return 1;\n      else if (PacketLoss_perc <= 5)\n         return 0;\n      else if (*bandwidth > OPUS_BANDWIDTH_NARROWBAND)\n         (*bandwidth)--;\n      else\n         break;\n   }\n   /* Couldn't find any bandwidth to enable FEC, keep original bandwidth. 
*/\n   *bandwidth = orig_bandwidth;\n   return 0;\n}\n\nstatic int compute_silk_rate_for_hybrid(int rate, int bandwidth, int frame20ms, int vbr, int fec) {\n   int entry;\n   int i;\n   int N;\n   int silk_rate;\n   static int rate_table[][5] = {\n  /*  |total| |-------- SILK------------|\n              |-- No FEC -| |--- FEC ---|\n               10ms   20ms   10ms   20ms */\n      {    0,     0,     0,     0,     0},\n      {12000, 10000, 10000, 11000, 11000},\n      {16000, 13500, 13500, 15000, 15000},\n      {20000, 16000, 16000, 18000, 18000},\n      {24000, 18000, 18000, 21000, 21000},\n      {32000, 22000, 22000, 28000, 28000},\n      {64000, 38000, 38000, 50000, 50000}\n   };\n   entry = 1 + frame20ms + 2*fec;\n   N = sizeof(rate_table)/sizeof(rate_table[0]);\n   for (i=1;i<N;i++)\n   {\n      if (rate_table[i][0] > rate) break;\n   }\n   if (i == N)\n   {\n      silk_rate = rate_table[i-1][entry];\n      /* For now, just give 50% of the extra bits to SILK. */\n      silk_rate += (rate-rate_table[i-1][0])/2;\n   } else {\n      opus_int32 lo, hi, x0, x1;\n      lo = rate_table[i-1][entry];\n      hi = rate_table[i][entry];\n      x0 = rate_table[i-1][0];\n      x1 = rate_table[i][0];\n      silk_rate = (lo*(x1-rate) + hi*(rate-x0))/(x1-x0);\n   }\n   if (!vbr)\n   {\n      /* Tiny boost to SILK for CBR. We should probably tune this better. */\n      silk_rate += 100;\n   }\n   if (bandwidth==OPUS_BANDWIDTH_SUPERWIDEBAND)\n      silk_rate += 300;\n   return silk_rate;\n}\n\n/* Returns the equivalent bitrate corresponding to 20 ms frames,\n   complexity 10 VBR operation. */\nstatic opus_int32 compute_equiv_rate(opus_int32 bitrate, int channels,\n      int frame_rate, int vbr, int mode, int complexity, int loss)\n{\n   opus_int32 equiv;\n   equiv = bitrate;\n   /* Take into account overhead from smaller frames. */\n   equiv -= (40*channels+20)*(frame_rate - 50);\n   /* CBR is about an 8% penalty for both SILK and CELT. 
*/\n   if (!vbr)\n      equiv -= equiv/12;\n   /* Complexity makes about 10% difference (from 0 to 10) in general. */\n   equiv = equiv * (90+complexity)/100;\n   if (mode == MODE_SILK_ONLY || mode == MODE_HYBRID)\n   {\n      /* SILK complexity 0-1 uses the non-delayed-decision NSQ, which\n         costs about 20%. */\n      if (complexity<2)\n         equiv = equiv*4/5;\n      equiv -= equiv*loss/(6*loss + 10);\n   } else if (mode == MODE_CELT_ONLY) {\n      /* CELT complexity 0-4 doesn't have the pitch filter, which costs\n         about 10%. */\n      if (complexity<5)\n         equiv = equiv*9/10;\n   } else {\n      /* Mode not known yet */\n      /* Half the SILK loss*/\n      equiv -= equiv*loss/(12*loss + 20);\n   }\n   return equiv;\n}\n\n#ifndef DISABLE_FLOAT_API\n\nstatic int is_digital_silence(const opus_val16* pcm, int frame_size, int channels, int lsb_depth)\n{\n   int silence = 0;\n   opus_val32 sample_max = 0;\n#ifdef MLP_TRAINING\n   return 0;\n#endif\n   sample_max = celt_maxabs16(pcm, frame_size*channels);\n\n#ifdef FIXED_POINT\n   silence = (sample_max == 0);\n   (void)lsb_depth;\n#else\n   silence = (sample_max <= (opus_val16) 1 / (1 << lsb_depth));\n#endif\n\n   return silence;\n}\n\n#ifdef FIXED_POINT\nstatic opus_val32 compute_frame_energy(const opus_val16 *pcm, int frame_size, int channels, int arch)\n{\n   int i;\n   opus_val32 sample_max;\n   int max_shift;\n   int shift;\n   opus_val32 energy = 0;\n   int len = frame_size*channels;\n   (void)arch;\n   /* Max amplitude in the signal */\n   sample_max = celt_maxabs16(pcm, len);\n\n   /* Compute the right shift required in the MAC to avoid an overflow */\n   max_shift = celt_ilog2(len);\n   shift = IMAX(0, (celt_ilog2(sample_max) << 1) + max_shift - 28);\n\n   /* Compute the energy */\n   for (i=0; i<len; i++)\n      energy += SHR32(MULT16_16(pcm[i], pcm[i]), shift);\n\n   /* Normalize energy by the frame size and left-shift back to the original position */\n   energy /= len;\n   energy = 
SHL32(energy, shift);\n\n   return energy;\n}\n#else\nstatic opus_val32 compute_frame_energy(const opus_val16 *pcm, int frame_size, int channels, int arch)\n{\n   int len = frame_size*channels;\n   return celt_inner_prod(pcm, pcm, len, arch)/len;\n}\n#endif\n\n/* Decides if DTX should be turned on (=1) or off (=0) */\nstatic int decide_dtx_mode(float activity_probability,    /* probability that current frame contains speech/music */\n                           int *nb_no_activity_frames,    /* number of consecutive frames with no activity */\n                           opus_val32 peak_signal_energy, /* peak energy of desired signal detected so far */\n                           const opus_val16 *pcm,         /* input pcm signal */\n                           int frame_size,                /* frame size */\n                           int channels,\n                           int is_silence,                 /* only digital silence detected in this frame */\n                           int arch\n                          )\n{\n   int is_noise;\n   opus_val32 noise_energy;\n   int is_sufficiently_quiet;\n\n   if (!is_silence)\n   {\n      is_noise = activity_probability < DTX_ACTIVITY_THRESHOLD;\n      if (is_noise)\n      {\n         noise_energy = compute_frame_energy(pcm, frame_size, channels, arch);\n         is_sufficiently_quiet = peak_signal_energy >= (PSEUDO_SNR_THRESHOLD * noise_energy);\n      }\n   }\n\n   if (is_silence || (is_noise && is_sufficiently_quiet))\n   {\n      /* The number of consecutive DTX frames should be within the allowed bounds */\n      (*nb_no_activity_frames)++;\n\n      if (*nb_no_activity_frames > NB_SPEECH_FRAMES_BEFORE_DTX)\n      {\n         if (*nb_no_activity_frames <= (NB_SPEECH_FRAMES_BEFORE_DTX + MAX_CONSECUTIVE_DTX))\n            /* Valid frame for DTX! 
*/\n            return 1;\n         else\n            (*nb_no_activity_frames) = NB_SPEECH_FRAMES_BEFORE_DTX;\n      }\n   } else\n      (*nb_no_activity_frames) = 0;\n\n   return 0;\n}\n\n#endif\n\nstatic opus_int32 encode_multiframe_packet(OpusEncoder *st,\n                                           const opus_val16 *pcm,\n                                           int nb_frames,\n                                           int frame_size,\n                                           unsigned char *data,\n                                           opus_int32 out_data_bytes,\n                                           int to_celt,\n                                           int lsb_depth,\n                                           int float_api)\n{\n   int i;\n   int ret = 0;\n   VARDECL(unsigned char, tmp_data);\n   int bak_mode, bak_bandwidth, bak_channels, bak_to_mono;\n   VARDECL(OpusRepacketizer, rp);\n   int max_header_bytes;\n   opus_int32 bytes_per_frame;\n   opus_int32 cbr_bytes;\n   opus_int32 repacketize_len;\n   int tmp_len;\n   ALLOC_STACK;\n\n   /* Worst cases:\n    * 2 frames: Code 2 with different compressed sizes\n    * >2 frames: Code 3 VBR */\n   max_header_bytes = nb_frames == 2 ? 
3 : (2+(nb_frames-1)*2);\n\n   if (st->use_vbr || st->user_bitrate_bps==OPUS_BITRATE_MAX)\n      repacketize_len = out_data_bytes;\n   else {\n      cbr_bytes = 3*st->bitrate_bps/(3*8*st->Fs/(frame_size*nb_frames));\n      repacketize_len = IMIN(cbr_bytes, out_data_bytes);\n   }\n   bytes_per_frame = IMIN(1276, 1+(repacketize_len-max_header_bytes)/nb_frames);\n\n   ALLOC(tmp_data, nb_frames*bytes_per_frame, unsigned char);\n   ALLOC(rp, 1, OpusRepacketizer);\n   opus_repacketizer_init(rp);\n\n   bak_mode = st->user_forced_mode;\n   bak_bandwidth = st->user_bandwidth;\n   bak_channels = st->force_channels;\n\n   st->user_forced_mode = st->mode;\n   st->user_bandwidth = st->bandwidth;\n   st->force_channels = st->stream_channels;\n\n   bak_to_mono = st->silk_mode.toMono;\n   if (bak_to_mono)\n      st->force_channels = 1;\n   else\n      st->prev_channels = st->stream_channels;\n\n   for (i=0;i<nb_frames;i++)\n   {\n      st->silk_mode.toMono = 0;\n      st->nonfinal_frame = i<(nb_frames-1);\n\n      /* When switching from SILK/Hybrid to CELT, only ask for a switch at the last frame */\n      if (to_celt && i==nb_frames-1)\n         st->user_forced_mode = MODE_CELT_ONLY;\n\n      tmp_len = opus_encode_native(st, pcm+i*(st->channels*frame_size), frame_size,\n         tmp_data+i*bytes_per_frame, bytes_per_frame, lsb_depth, NULL, 0, 0, 0, 0,\n         NULL, float_api);\n\n      if (tmp_len<0)\n      {\n         RESTORE_STACK;\n         return OPUS_INTERNAL_ERROR;\n      }\n\n      ret = opus_repacketizer_cat(rp, tmp_data+i*bytes_per_frame, tmp_len);\n\n      if (ret<0)\n      {\n         RESTORE_STACK;\n         return OPUS_INTERNAL_ERROR;\n      }\n   }\n\n   ret = opus_repacketizer_out_range_impl(rp, 0, nb_frames, data, repacketize_len, 0, !st->use_vbr);\n\n   if (ret<0)\n   {\n      RESTORE_STACK;\n      return OPUS_INTERNAL_ERROR;\n   }\n\n   /* Discard configs that were forced locally for the purpose of repacketization */\n   st->user_forced_mode = bak_mode;\n   
st->user_bandwidth = bak_bandwidth;\n   st->force_channels = bak_channels;\n   st->silk_mode.toMono = bak_to_mono;\n\n   RESTORE_STACK;\n   return ret;\n}\n\nstatic int compute_redundancy_bytes(opus_int32 max_data_bytes, opus_int32 bitrate_bps, int frame_rate, int channels)\n{\n   int redundancy_bytes_cap;\n   int redundancy_bytes;\n   opus_int32 redundancy_rate;\n   int base_bits;\n   opus_int32 available_bits;\n   base_bits = (40*channels+20);\n\n   /* Equivalent rate for 5 ms frames. */\n   redundancy_rate = bitrate_bps + base_bits*(200 - frame_rate);\n   /* For VBR, further increase the bitrate if we can afford it. It's pretty short\n      and we'll avoid artefacts. */\n   redundancy_rate = 3*redundancy_rate/2;\n   redundancy_bytes = redundancy_rate/1600;\n\n   /* Compute the max rate we can use given CBR or VBR with cap. */\n   available_bits = max_data_bytes*8 - 2*base_bits;\n   redundancy_bytes_cap = (available_bits*240/(240+48000/frame_rate) + base_bits)/8;\n   redundancy_bytes = IMIN(redundancy_bytes, redundancy_bytes_cap);\n   /* If we can't get enough bits for redundancy to be worth it, rely on the decoder PLC. 
*/\n   if (redundancy_bytes > 4 + 8*channels)\n      redundancy_bytes = IMIN(257, redundancy_bytes);\n   else\n      redundancy_bytes = 0;\n   return redundancy_bytes;\n}\n\nopus_int32 opus_encode_native(OpusEncoder *st, const opus_val16 *pcm, int frame_size,\n                unsigned char *data, opus_int32 out_data_bytes, int lsb_depth,\n                const void *analysis_pcm, opus_int32 analysis_size, int c1, int c2,\n                int analysis_channels, downmix_func downmix, int float_api)\n{\n    void *silk_enc;\n    CELTEncoder *celt_enc;\n    int i;\n    int ret=0;\n    opus_int32 nBytes;\n    ec_enc enc;\n    int bytes_target;\n    int prefill=0;\n    int start_band = 0;\n    int redundancy = 0;\n    int redundancy_bytes = 0; /* Number of bytes to use for redundancy frame */\n    int celt_to_silk = 0;\n    VARDECL(opus_val16, pcm_buf);\n    int nb_compr_bytes;\n    int to_celt = 0;\n    opus_uint32 redundant_rng = 0;\n    int cutoff_Hz, hp_freq_smth1;\n    int voice_est; /* Probability of voice in Q7 */\n    opus_int32 equiv_rate;\n    int delay_compensation;\n    int frame_rate;\n    opus_int32 max_rate; /* Max bitrate we're allowed to use */\n    int curr_bandwidth;\n    opus_val16 HB_gain;\n    opus_int32 max_data_bytes; /* Max number of bytes we're allowed to use */\n    int total_buffer;\n    opus_val16 stereo_width;\n    const CELTMode *celt_mode;\n#ifndef DISABLE_FLOAT_API\n    AnalysisInfo analysis_info;\n    int analysis_read_pos_bak=-1;\n    int analysis_read_subframe_bak=-1;\n    int is_silence = 0;\n#endif\n    VARDECL(opus_val16, tmp_prefill);\n\n    ALLOC_STACK;\n\n    max_data_bytes = IMIN(1276, out_data_bytes);\n\n    st->rangeFinal = 0;\n    if (frame_size <= 0 || max_data_bytes <= 0)\n    {\n       RESTORE_STACK;\n       return OPUS_BAD_ARG;\n    }\n\n    /* Cannot encode 100 ms in 1 byte */\n    if (max_data_bytes==1 && st->Fs==(frame_size*10))\n    {\n      RESTORE_STACK;\n      return OPUS_BUFFER_TOO_SMALL;\n    }\n\n    silk_enc = 
(char*)st+st->silk_enc_offset;\n    celt_enc = (CELTEncoder*)((char*)st+st->celt_enc_offset);\n    if (st->application == OPUS_APPLICATION_RESTRICTED_LOWDELAY)\n       delay_compensation = 0;\n    else\n       delay_compensation = st->delay_compensation;\n\n    lsb_depth = IMIN(lsb_depth, st->lsb_depth);\n\n    celt_encoder_ctl(celt_enc, CELT_GET_MODE(&celt_mode));\n#ifndef DISABLE_FLOAT_API\n    analysis_info.valid = 0;\n#ifdef FIXED_POINT\n    if (st->silk_mode.complexity >= 10 && st->Fs>=16000)\n#else\n    if (st->silk_mode.complexity >= 7 && st->Fs>=16000)\n#endif\n    {\n       if (is_digital_silence(pcm, frame_size, st->channels, lsb_depth))\n       {\n          is_silence = 1;\n       } else {\n          analysis_read_pos_bak = st->analysis.read_pos;\n          analysis_read_subframe_bak = st->analysis.read_subframe;\n          run_analysis(&st->analysis, celt_mode, analysis_pcm, analysis_size, frame_size,\n                c1, c2, analysis_channels, st->Fs,\n                lsb_depth, downmix, &analysis_info);\n       }\n\n       /* Track the peak signal energy */\n       if (!is_silence && analysis_info.activity_probability > DTX_ACTIVITY_THRESHOLD)\n          st->peak_signal_energy = MAX32(MULT16_32_Q15(QCONST16(0.999f, 15), st->peak_signal_energy),\n                compute_frame_energy(pcm, frame_size, st->channels, st->arch));\n    }\n#else\n    (void)analysis_pcm;\n    (void)analysis_size;\n    (void)c1;\n    (void)c2;\n    (void)analysis_channels;\n    (void)downmix;\n#endif\n\n#ifndef DISABLE_FLOAT_API\n    /* Reset voice_ratio if this frame is not silent or if analysis is disabled.\n     * Otherwise, preserve voice_ratio from the last non-silent frame */\n    if (!is_silence)\n      st->voice_ratio = -1;\n\n    st->detected_bandwidth = 0;\n    if (analysis_info.valid)\n    {\n       int analysis_bandwidth;\n       if (st->signal_type == OPUS_AUTO)\n          st->voice_ratio = (int)floor(.5+100*(1-analysis_info.music_prob));\n\n       
analysis_bandwidth = analysis_info.bandwidth;\n       if (analysis_bandwidth<=12)\n          st->detected_bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n       else if (analysis_bandwidth<=14)\n          st->detected_bandwidth = OPUS_BANDWIDTH_MEDIUMBAND;\n       else if (analysis_bandwidth<=16)\n          st->detected_bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n       else if (analysis_bandwidth<=18)\n          st->detected_bandwidth = OPUS_BANDWIDTH_SUPERWIDEBAND;\n       else\n          st->detected_bandwidth = OPUS_BANDWIDTH_FULLBAND;\n    }\n#else\n    st->voice_ratio = -1;\n#endif\n\n    if (st->channels==2 && st->force_channels!=1)\n       stereo_width = compute_stereo_width(pcm, frame_size, st->Fs, &st->width_mem);\n    else\n       stereo_width = 0;\n    total_buffer = delay_compensation;\n    st->bitrate_bps = user_bitrate_to_bitrate(st, frame_size, max_data_bytes);\n\n    frame_rate = st->Fs/frame_size;\n    if (!st->use_vbr)\n    {\n       int cbrBytes;\n       /* Multiply by 12 to make sure the division is exact. */\n       int frame_rate12 = 12*st->Fs/frame_size;\n       /* We need to make sure that \"int\" values always fit in 16 bits. */\n       cbrBytes = IMIN( (12*st->bitrate_bps/8 + frame_rate12/2)/frame_rate12, max_data_bytes);\n       st->bitrate_bps = cbrBytes*(opus_int32)frame_rate12*8/12;\n       /* Make sure we provide at least one byte to avoid failing. */\n       max_data_bytes = IMAX(1, cbrBytes);\n    }\n    if (max_data_bytes<3 || st->bitrate_bps < 3*frame_rate*8\n       || (frame_rate<50 && (max_data_bytes*frame_rate<300 || st->bitrate_bps < 2400)))\n    {\n       /*If the space is too low to do something useful, emit 'PLC' frames.*/\n       int tocmode = st->mode;\n       int bw = st->bandwidth == 0 ? 
OPUS_BANDWIDTH_NARROWBAND : st->bandwidth;\n       int packet_code = 0;\n       int num_multiframes = 0;\n\n       if (tocmode==0)\n          tocmode = MODE_SILK_ONLY;\n       if (frame_rate>100)\n          tocmode = MODE_CELT_ONLY;\n       /* 40 ms -> 2 x 20 ms if in CELT_ONLY or HYBRID mode */\n       if (frame_rate==25 && tocmode!=MODE_SILK_ONLY)\n       {\n          frame_rate = 50;\n          packet_code = 1;\n       }\n\n       /* >= 60 ms frames */\n       if (frame_rate<=16)\n       {\n          /* 1 x 60 ms, 2 x 40 ms, 2 x 60 ms */\n          if (out_data_bytes==1 || (tocmode==MODE_SILK_ONLY && frame_rate!=10))\n          {\n             tocmode = MODE_SILK_ONLY;\n\n             packet_code = frame_rate <= 12;\n             frame_rate = frame_rate == 12 ? 25 : 16;\n          }\n          else\n          {\n             num_multiframes = 50/frame_rate;\n             frame_rate = 50;\n             packet_code = 3;\n          }\n       }\n\n       if(tocmode==MODE_SILK_ONLY&&bw>OPUS_BANDWIDTH_WIDEBAND)\n          bw=OPUS_BANDWIDTH_WIDEBAND;\n       else if (tocmode==MODE_CELT_ONLY&&bw==OPUS_BANDWIDTH_MEDIUMBAND)\n          bw=OPUS_BANDWIDTH_NARROWBAND;\n       else if (tocmode==MODE_HYBRID&&bw<=OPUS_BANDWIDTH_SUPERWIDEBAND)\n          bw=OPUS_BANDWIDTH_SUPERWIDEBAND;\n\n       data[0] = gen_toc(tocmode, frame_rate, bw, st->stream_channels);\n       data[0] |= packet_code;\n\n       ret = packet_code <= 1 ? 
1 : 2;\n\n       max_data_bytes = IMAX(max_data_bytes, ret);\n\n       if (packet_code==3)\n          data[1] = num_multiframes;\n\n       if (!st->use_vbr)\n       {\n          ret = opus_packet_pad(data, ret, max_data_bytes);\n          if (ret == OPUS_OK)\n             ret = max_data_bytes;\n          else\n             ret = OPUS_INTERNAL_ERROR;\n       }\n       RESTORE_STACK;\n       return ret;\n    }\n    max_rate = frame_rate*max_data_bytes*8;\n\n    /* Equivalent 20-ms rate for mode/channel/bandwidth decisions */\n    equiv_rate = compute_equiv_rate(st->bitrate_bps, st->channels, st->Fs/frame_size,\n          st->use_vbr, 0, st->silk_mode.complexity, st->silk_mode.packetLossPercentage);\n\n    if (st->signal_type == OPUS_SIGNAL_VOICE)\n       voice_est = 127;\n    else if (st->signal_type == OPUS_SIGNAL_MUSIC)\n       voice_est = 0;\n    else if (st->voice_ratio >= 0)\n    {\n       voice_est = st->voice_ratio*327>>8;\n       /* For AUDIO, never be more than 90% confident of having speech */\n       if (st->application == OPUS_APPLICATION_AUDIO)\n          voice_est = IMIN(voice_est, 115);\n    } else if (st->application == OPUS_APPLICATION_VOIP)\n       voice_est = 115;\n    else\n       voice_est = 48;\n\n    if (st->force_channels!=OPUS_AUTO && st->channels == 2)\n    {\n        st->stream_channels = st->force_channels;\n    } else {\n#ifdef FUZZING\n       /* Random mono/stereo decision */\n       if (st->channels == 2 && (rand()&0x1F)==0)\n          st->stream_channels = 3-st->stream_channels;\n#else\n       /* Rate-dependent mono-stereo decision */\n       if (st->channels == 2)\n       {\n          opus_int32 stereo_threshold;\n          stereo_threshold = stereo_music_threshold + ((voice_est*voice_est*(stereo_voice_threshold-stereo_music_threshold))>>14);\n          if (st->stream_channels == 2)\n             stereo_threshold -= 1000;\n          else\n             stereo_threshold += 1000;\n          st->stream_channels = (equiv_rate > 
stereo_threshold) ? 2 : 1;\n       } else {\n          st->stream_channels = st->channels;\n       }\n#endif\n    }\n    /* Update equivalent rate for channels decision. */\n    equiv_rate = compute_equiv_rate(st->bitrate_bps, st->stream_channels, st->Fs/frame_size,\n          st->use_vbr, 0, st->silk_mode.complexity, st->silk_mode.packetLossPercentage);\n\n    /* Mode selection depending on application and signal type */\n    if (st->application == OPUS_APPLICATION_RESTRICTED_LOWDELAY)\n    {\n       st->mode = MODE_CELT_ONLY;\n    } else if (st->user_forced_mode == OPUS_AUTO)\n    {\n#ifdef FUZZING\n       /* Random mode switching */\n       if ((rand()&0xF)==0)\n       {\n          if ((rand()&0x1)==0)\n             st->mode = MODE_CELT_ONLY;\n          else\n             st->mode = MODE_SILK_ONLY;\n       } else {\n          if (st->prev_mode==MODE_CELT_ONLY)\n             st->mode = MODE_CELT_ONLY;\n          else\n             st->mode = MODE_SILK_ONLY;\n       }\n#else\n       opus_int32 mode_voice, mode_music;\n       opus_int32 threshold;\n\n       /* Interpolate based on stereo width */\n       mode_voice = (opus_int32)(MULT16_32_Q15(Q15ONE-stereo_width,mode_thresholds[0][0])\n             + MULT16_32_Q15(stereo_width,mode_thresholds[1][0]));\n       mode_music = (opus_int32)(MULT16_32_Q15(Q15ONE-stereo_width,mode_thresholds[1][1])\n             + MULT16_32_Q15(stereo_width,mode_thresholds[1][1]));\n       /* Interpolate based on speech/music probability */\n       threshold = mode_music + ((voice_est*voice_est*(mode_voice-mode_music))>>14);\n       /* Bias towards SILK for VoIP because of some useful features */\n       if (st->application == OPUS_APPLICATION_VOIP)\n          threshold += 8000;\n\n       /*printf(\"%f %d\\n\", stereo_width/(float)Q15ONE, threshold);*/\n       /* Hysteresis */\n       if (st->prev_mode == MODE_CELT_ONLY)\n           threshold -= 4000;\n       else if (st->prev_mode>0)\n           threshold += 4000;\n\n       st->mode = 
(equiv_rate >= threshold) ? MODE_CELT_ONLY: MODE_SILK_ONLY;\n\n       /* When FEC is enabled and there's enough packet loss, use SILK */\n       if (st->silk_mode.useInBandFEC && st->silk_mode.packetLossPercentage > (128-voice_est)>>4)\n          st->mode = MODE_SILK_ONLY;\n       /* When encoding voice and DTX is enabled but the generalized DTX cannot be used,\n          because of complexity and sampling frequency settings, switch to SILK DTX and\n          set the encoder to SILK mode */\n#ifndef DISABLE_FLOAT_API\n       st->silk_mode.useDTX = st->use_dtx && !(analysis_info.valid || is_silence);\n#else\n       st->silk_mode.useDTX = st->use_dtx;\n#endif\n       if (st->silk_mode.useDTX && voice_est > 100)\n          st->mode = MODE_SILK_ONLY;\n#endif\n\n       /* If max_data_bytes represents less than 6 kb/s, switch to CELT-only mode */\n       if (max_data_bytes < (frame_rate > 50 ? 9000 : 6000)*frame_size / (st->Fs * 8))\n          st->mode = MODE_CELT_ONLY;\n    } else {\n       st->mode = st->user_forced_mode;\n    }\n\n    /* Override the chosen mode to make sure we meet the requested frame size */\n    if (st->mode != MODE_CELT_ONLY && frame_size < st->Fs/100)\n       st->mode = MODE_CELT_ONLY;\n    if (st->lfe)\n       st->mode = MODE_CELT_ONLY;\n\n    if (st->prev_mode > 0 &&\n        ((st->mode != MODE_CELT_ONLY && st->prev_mode == MODE_CELT_ONLY) ||\n    (st->mode == MODE_CELT_ONLY && st->prev_mode != MODE_CELT_ONLY)))\n    {\n        redundancy = 1;\n        celt_to_silk = (st->mode != MODE_CELT_ONLY);\n        if (!celt_to_silk)\n        {\n            /* Switch to SILK/hybrid if frame size is 10 ms or more*/\n            if (frame_size >= st->Fs/100)\n            {\n                st->mode = st->prev_mode;\n                to_celt = 1;\n            } else {\n                redundancy=0;\n            }\n        }\n    }\n\n    /* When encoding multiframes, we can ask for a switch to CELT only in the last frame. 
This switch\n     * is processed above as the requested mode shouldn't interrupt stereo->mono transition. */\n    if (st->stream_channels == 1 && st->prev_channels ==2 && st->silk_mode.toMono==0\n          && st->mode != MODE_CELT_ONLY && st->prev_mode != MODE_CELT_ONLY)\n    {\n       /* Delay stereo->mono transition by two frames so that SILK can do a smooth downmix */\n       st->silk_mode.toMono = 1;\n       st->stream_channels = 2;\n    } else {\n       st->silk_mode.toMono = 0;\n    }\n\n    /* Update equivalent rate with mode decision. */\n    equiv_rate = compute_equiv_rate(st->bitrate_bps, st->stream_channels, st->Fs/frame_size,\n          st->use_vbr, st->mode, st->silk_mode.complexity, st->silk_mode.packetLossPercentage);\n\n    if (st->mode != MODE_CELT_ONLY && st->prev_mode == MODE_CELT_ONLY)\n    {\n        silk_EncControlStruct dummy;\n        silk_InitEncoder( silk_enc, st->arch, &dummy);\n        prefill=1;\n    }\n\n    /* Automatic (rate-dependent) bandwidth selection */\n    if (st->mode == MODE_CELT_ONLY || st->first || st->silk_mode.allowBandwidthSwitch)\n    {\n        const opus_int32 *voice_bandwidth_thresholds, *music_bandwidth_thresholds;\n        opus_int32 bandwidth_thresholds[8];\n        int bandwidth = OPUS_BANDWIDTH_FULLBAND;\n\n        if (st->channels==2 && st->force_channels!=1)\n        {\n           voice_bandwidth_thresholds = stereo_voice_bandwidth_thresholds;\n           music_bandwidth_thresholds = stereo_music_bandwidth_thresholds;\n        } else {\n           voice_bandwidth_thresholds = mono_voice_bandwidth_thresholds;\n           music_bandwidth_thresholds = mono_music_bandwidth_thresholds;\n        }\n        /* Interpolate bandwidth thresholds depending on voice estimation */\n        for (i=0;i<8;i++)\n        {\n           bandwidth_thresholds[i] = music_bandwidth_thresholds[i]\n                    + ((voice_est*voice_est*(voice_bandwidth_thresholds[i]-music_bandwidth_thresholds[i]))>>14);\n        }\n        do 
{\n            int threshold, hysteresis;\n            threshold = bandwidth_thresholds[2*(bandwidth-OPUS_BANDWIDTH_MEDIUMBAND)];\n            hysteresis = bandwidth_thresholds[2*(bandwidth-OPUS_BANDWIDTH_MEDIUMBAND)+1];\n            if (!st->first)\n            {\n                if (st->auto_bandwidth >= bandwidth)\n                    threshold -= hysteresis;\n                else\n                    threshold += hysteresis;\n            }\n            if (equiv_rate >= threshold)\n                break;\n        } while (--bandwidth>OPUS_BANDWIDTH_NARROWBAND);\n        st->bandwidth = st->auto_bandwidth = bandwidth;\n        /* Prevents any transition to SWB/FB until the SILK layer has fully\n           switched to WB mode and turned the variable LP filter off */\n        if (!st->first && st->mode != MODE_CELT_ONLY && !st->silk_mode.inWBmodeWithoutVariableLP && st->bandwidth > OPUS_BANDWIDTH_WIDEBAND)\n            st->bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n    }\n\n    if (st->bandwidth>st->max_bandwidth)\n       st->bandwidth = st->max_bandwidth;\n\n    if (st->user_bandwidth != OPUS_AUTO)\n        st->bandwidth = st->user_bandwidth;\n\n    /* This prevents us from using hybrid at unsafe CBR/max rates */\n    if (st->mode != MODE_CELT_ONLY && max_rate < 15000)\n    {\n       st->bandwidth = IMIN(st->bandwidth, OPUS_BANDWIDTH_WIDEBAND);\n    }\n\n    /* Prevents Opus from wasting bits on frequencies that are above\n       the Nyquist rate of the input signal */\n    if (st->Fs <= 24000 && st->bandwidth > OPUS_BANDWIDTH_SUPERWIDEBAND)\n        st->bandwidth = OPUS_BANDWIDTH_SUPERWIDEBAND;\n    if (st->Fs <= 16000 && st->bandwidth > OPUS_BANDWIDTH_WIDEBAND)\n        st->bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n    if (st->Fs <= 12000 && st->bandwidth > OPUS_BANDWIDTH_MEDIUMBAND)\n        st->bandwidth = OPUS_BANDWIDTH_MEDIUMBAND;\n    if (st->Fs <= 8000 && st->bandwidth > OPUS_BANDWIDTH_NARROWBAND)\n        st->bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n#ifndef 
DISABLE_FLOAT_API\n    /* Use detected bandwidth to reduce the encoded bandwidth. */\n    if (st->detected_bandwidth && st->user_bandwidth == OPUS_AUTO)\n    {\n       int min_detected_bandwidth;\n       /* Makes bandwidth detection more conservative just in case the detector\n          gets it wrong when we could have coded a high bandwidth transparently.\n          When operating in SILK/hybrid mode, we don't go below wideband to avoid\n          more complicated switches that require redundancy. */\n       if (equiv_rate <= 18000*st->stream_channels && st->mode == MODE_CELT_ONLY)\n          min_detected_bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n       else if (equiv_rate <= 24000*st->stream_channels && st->mode == MODE_CELT_ONLY)\n          min_detected_bandwidth = OPUS_BANDWIDTH_MEDIUMBAND;\n       else if (equiv_rate <= 30000*st->stream_channels)\n          min_detected_bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n       else if (equiv_rate <= 44000*st->stream_channels)\n          min_detected_bandwidth = OPUS_BANDWIDTH_SUPERWIDEBAND;\n       else\n          min_detected_bandwidth = OPUS_BANDWIDTH_FULLBAND;\n\n       st->detected_bandwidth = IMAX(st->detected_bandwidth, min_detected_bandwidth);\n       st->bandwidth = IMIN(st->bandwidth, st->detected_bandwidth);\n    }\n#endif\n    st->silk_mode.LBRR_coded = decide_fec(st->silk_mode.useInBandFEC, st->silk_mode.packetLossPercentage,\n          st->silk_mode.LBRR_coded, st->mode, &st->bandwidth, equiv_rate);\n    celt_encoder_ctl(celt_enc, OPUS_SET_LSB_DEPTH(lsb_depth));\n\n    /* CELT mode doesn't support mediumband, use wideband instead */\n    if (st->mode == MODE_CELT_ONLY && st->bandwidth == OPUS_BANDWIDTH_MEDIUMBAND)\n        st->bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n    if (st->lfe)\n       st->bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n\n    curr_bandwidth = st->bandwidth;\n\n    /* Chooses the appropriate mode for speech\n       *NEVER* switch to/from CELT-only mode here as this will invalidate some assumptions 
*/\n    if (st->mode == MODE_SILK_ONLY && curr_bandwidth > OPUS_BANDWIDTH_WIDEBAND)\n        st->mode = MODE_HYBRID;\n    if (st->mode == MODE_HYBRID && curr_bandwidth <= OPUS_BANDWIDTH_WIDEBAND)\n        st->mode = MODE_SILK_ONLY;\n\n    /* Can't support frames longer than 60 ms, nor longer than 20 ms in Hybrid or CELT-only modes */\n    if ((frame_size > st->Fs/50 && (st->mode != MODE_SILK_ONLY)) || frame_size > 3*st->Fs/50)\n    {\n       int enc_frame_size;\n       int nb_frames;\n\n       if (st->mode == MODE_SILK_ONLY)\n       {\n         if (frame_size == 2*st->Fs/25)  /* 80 ms -> 2x 40 ms */\n           enc_frame_size = st->Fs/25;\n         else if (frame_size == 3*st->Fs/25)  /* 120 ms -> 2x 60 ms */\n           enc_frame_size = 3*st->Fs/50;\n         else                            /* 100 ms -> 5x 20 ms */\n           enc_frame_size = st->Fs/50;\n       }\n       else\n         enc_frame_size = st->Fs/50;\n\n       nb_frames = frame_size/enc_frame_size;\n\n#ifndef DISABLE_FLOAT_API\n       if (analysis_read_pos_bak!= -1)\n       {\n          st->analysis.read_pos = analysis_read_pos_bak;\n          st->analysis.read_subframe = analysis_read_subframe_bak;\n       }\n#endif\n\n       ret = encode_multiframe_packet(st, pcm, nb_frames, enc_frame_size, data,\n                                      out_data_bytes, to_celt, lsb_depth, float_api);\n\n       RESTORE_STACK;\n       return ret;\n    }\n\n    /* For the first frame at a new SILK bandwidth */\n    if (st->silk_bw_switch)\n    {\n       redundancy = 1;\n       celt_to_silk = 1;\n       st->silk_bw_switch = 0;\n       prefill=1;\n    }\n\n    /* If we decided to go with CELT, make sure redundancy is off, no matter what\n       we decided earlier. 
*/\n    if (st->mode == MODE_CELT_ONLY)\n        redundancy = 0;\n\n    if (redundancy)\n    {\n       redundancy_bytes = compute_redundancy_bytes(max_data_bytes, st->bitrate_bps, frame_rate, st->stream_channels);\n       if (redundancy_bytes == 0)\n          redundancy = 0;\n    }\n\n    /* printf(\"%d %d %d %d\\n\", st->bitrate_bps, st->stream_channels, st->mode, curr_bandwidth); */\n    bytes_target = IMIN(max_data_bytes-redundancy_bytes, st->bitrate_bps * frame_size / (st->Fs * 8)) - 1;\n\n    data += 1;\n\n    ec_enc_init(&enc, data, max_data_bytes-1);\n\n    ALLOC(pcm_buf, (total_buffer+frame_size)*st->channels, opus_val16);\n    OPUS_COPY(pcm_buf, &st->delay_buffer[(st->encoder_buffer-total_buffer)*st->channels], total_buffer*st->channels);\n\n    if (st->mode == MODE_CELT_ONLY)\n       hp_freq_smth1 = silk_LSHIFT( silk_lin2log( VARIABLE_HP_MIN_CUTOFF_HZ ), 8 );\n    else\n       hp_freq_smth1 = ((silk_encoder*)silk_enc)->state_Fxx[0].sCmn.variable_HP_smth1_Q15;\n\n    st->variable_HP_smth2_Q15 = silk_SMLAWB( st->variable_HP_smth2_Q15,\n          hp_freq_smth1 - st->variable_HP_smth2_Q15, SILK_FIX_CONST( VARIABLE_HP_SMTH_COEF2, 16 ) );\n\n    /* convert from log scale to Hertz */\n    cutoff_Hz = silk_log2lin( silk_RSHIFT( st->variable_HP_smth2_Q15, 8 ) );\n\n    if (st->application == OPUS_APPLICATION_VOIP)\n    {\n       hp_cutoff(pcm, cutoff_Hz, &pcm_buf[total_buffer*st->channels], st->hp_mem, frame_size, st->channels, st->Fs, st->arch);\n    } else {\n       dc_reject(pcm, 3, &pcm_buf[total_buffer*st->channels], st->hp_mem, frame_size, st->channels, st->Fs);\n    }\n#ifndef FIXED_POINT\n    if (float_api)\n    {\n       opus_val32 sum;\n       sum = celt_inner_prod(&pcm_buf[total_buffer*st->channels], &pcm_buf[total_buffer*st->channels], frame_size*st->channels, st->arch);\n       /* This should filter out both NaNs and ridiculous signals that could\n          cause NaNs further down. 
*/\n       if (!(sum < 1e9f) || celt_isnan(sum))\n       {\n          OPUS_CLEAR(&pcm_buf[total_buffer*st->channels], frame_size*st->channels);\n          st->hp_mem[0] = st->hp_mem[1] = st->hp_mem[2] = st->hp_mem[3] = 0;\n       }\n    }\n#endif\n\n\n    /* SILK processing */\n    HB_gain = Q15ONE;\n    if (st->mode != MODE_CELT_ONLY)\n    {\n        opus_int32 total_bitRate, celt_rate;\n#ifdef FIXED_POINT\n       const opus_int16 *pcm_silk;\n#else\n       VARDECL(opus_int16, pcm_silk);\n       ALLOC(pcm_silk, st->channels*frame_size, opus_int16);\n#endif\n\n        /* Distribute bits between SILK and CELT */\n        total_bitRate = 8 * bytes_target * frame_rate;\n        if( st->mode == MODE_HYBRID ) {\n            /* Base rate for SILK */\n            st->silk_mode.bitRate = compute_silk_rate_for_hybrid(total_bitRate,\n                  curr_bandwidth, st->Fs == 50 * frame_size, st->use_vbr, st->silk_mode.LBRR_coded);\n            if (!st->energy_masking)\n            {\n               /* Increasingly attenuate high band when it gets allocated fewer bits */\n               celt_rate = total_bitRate - st->silk_mode.bitRate;\n               HB_gain = Q15ONE - SHR32(celt_exp2(-celt_rate * QCONST16(1.f/1024, 10)), 1);\n            }\n        } else {\n            /* SILK gets all bits */\n            st->silk_mode.bitRate = total_bitRate;\n        }\n\n        /* Surround masking for SILK */\n        if (st->energy_masking && st->use_vbr && !st->lfe)\n        {\n           opus_val32 mask_sum=0;\n           opus_val16 masking_depth;\n           opus_int32 rate_offset;\n           int c;\n           int end = 17;\n           opus_int16 srate = 16000;\n           if (st->bandwidth == OPUS_BANDWIDTH_NARROWBAND)\n           {\n              end = 13;\n              srate = 8000;\n           } else if (st->bandwidth == OPUS_BANDWIDTH_MEDIUMBAND)\n           {\n              end = 15;\n              srate = 12000;\n           }\n           for (c=0;c<st->channels;c++)\n  
         {\n              for(i=0;i<end;i++)\n              {\n                 opus_val16 mask;\n                 mask = MAX16(MIN16(st->energy_masking[21*c+i],\n                        QCONST16(.5f, DB_SHIFT)), -QCONST16(2.0f, DB_SHIFT));\n                 if (mask > 0)\n                    mask = HALF16(mask);\n                 mask_sum += mask;\n              }\n           }\n           /* Conservative rate reduction, we cut the masking in half */\n           masking_depth = mask_sum / end*st->channels;\n           masking_depth += QCONST16(.2f, DB_SHIFT);\n           rate_offset = (opus_int32)PSHR32(MULT16_16(srate, masking_depth), DB_SHIFT);\n           rate_offset = MAX32(rate_offset, -2*st->silk_mode.bitRate/3);\n           /* Split the rate change between the SILK and CELT part for hybrid. */\n           if (st->bandwidth==OPUS_BANDWIDTH_SUPERWIDEBAND || st->bandwidth==OPUS_BANDWIDTH_FULLBAND)\n              st->silk_mode.bitRate += 3*rate_offset/5;\n           else\n              st->silk_mode.bitRate += rate_offset;\n        }\n\n        st->silk_mode.payloadSize_ms = 1000 * frame_size / st->Fs;\n        st->silk_mode.nChannelsAPI = st->channels;\n        st->silk_mode.nChannelsInternal = st->stream_channels;\n        if (curr_bandwidth == OPUS_BANDWIDTH_NARROWBAND) {\n            st->silk_mode.desiredInternalSampleRate = 8000;\n        } else if (curr_bandwidth == OPUS_BANDWIDTH_MEDIUMBAND) {\n            st->silk_mode.desiredInternalSampleRate = 12000;\n        } else {\n            silk_assert( st->mode == MODE_HYBRID || curr_bandwidth == OPUS_BANDWIDTH_WIDEBAND );\n            st->silk_mode.desiredInternalSampleRate = 16000;\n        }\n        if( st->mode == MODE_HYBRID ) {\n            /* Don't allow bandwidth reduction at lowest bitrates in hybrid mode */\n            st->silk_mode.minInternalSampleRate = 16000;\n        } else {\n            st->silk_mode.minInternalSampleRate = 8000;\n        }\n\n        st->silk_mode.maxInternalSampleRate = 
16000;\n        if (st->mode == MODE_SILK_ONLY)\n        {\n           opus_int32 effective_max_rate = max_rate;\n           if (frame_rate > 50)\n              effective_max_rate = effective_max_rate*2/3;\n           if (effective_max_rate < 8000)\n           {\n              st->silk_mode.maxInternalSampleRate = 12000;\n              st->silk_mode.desiredInternalSampleRate = IMIN(12000, st->silk_mode.desiredInternalSampleRate);\n           }\n           if (effective_max_rate < 7000)\n           {\n              st->silk_mode.maxInternalSampleRate = 8000;\n              st->silk_mode.desiredInternalSampleRate = IMIN(8000, st->silk_mode.desiredInternalSampleRate);\n           }\n        }\n\n        st->silk_mode.useCBR = !st->use_vbr;\n\n        /* Call SILK encoder for the low band */\n\n        /* Max bits for SILK, counting ToC, redundancy bytes, and optionally redundancy. */\n        st->silk_mode.maxBits = (max_data_bytes-1)*8;\n        if (redundancy && redundancy_bytes >= 2)\n        {\n           /* Counting 1 bit for redundancy position and 20 bits for flag+size (only for hybrid). */\n           st->silk_mode.maxBits -= redundancy_bytes*8 + 1;\n           if (st->mode == MODE_HYBRID)\n              st->silk_mode.maxBits -= 20;\n        }\n        if (st->silk_mode.useCBR)\n        {\n           if (st->mode == MODE_HYBRID)\n           {\n              st->silk_mode.maxBits = IMIN(st->silk_mode.maxBits, st->silk_mode.bitRate * frame_size / st->Fs);\n           }\n        } else {\n           /* Constrained VBR. 
*/\n           if (st->mode == MODE_HYBRID)\n           {\n              /* Compute SILK bitrate corresponding to the max total bits available */\n              opus_int32 maxBitRate = compute_silk_rate_for_hybrid(st->silk_mode.maxBits*st->Fs / frame_size,\n                    curr_bandwidth, st->Fs == 50 * frame_size, st->use_vbr, st->silk_mode.LBRR_coded);\n              st->silk_mode.maxBits = maxBitRate * frame_size / st->Fs;\n           }\n        }\n\n        if (prefill)\n        {\n            opus_int32 zero=0;\n            int prefill_offset;\n            /* Use a smooth onset for the SILK prefill to avoid the encoder trying to encode\n               a discontinuity. The exact location is what we need to avoid leaving any \"gap\"\n               in the audio when mixing with the redundant CELT frame. Here we can afford to\n               overwrite st->delay_buffer because the only thing that uses it before it gets\n               rewritten is tmp_prefill[] and even then only the part after the ramp really\n               gets used (rather than sent to the encoder and discarded) */\n            prefill_offset = st->channels*(st->encoder_buffer-st->delay_compensation-st->Fs/400);\n            gain_fade(st->delay_buffer+prefill_offset, st->delay_buffer+prefill_offset,\n                  0, Q15ONE, celt_mode->overlap, st->Fs/400, st->channels, celt_mode->window, st->Fs);\n            OPUS_CLEAR(st->delay_buffer, prefill_offset);\n#ifdef FIXED_POINT\n            pcm_silk = st->delay_buffer;\n#else\n            for (i=0;i<st->encoder_buffer*st->channels;i++)\n                pcm_silk[i] = FLOAT2INT16(st->delay_buffer[i]);\n#endif\n            silk_Encode( silk_enc, &st->silk_mode, pcm_silk, st->encoder_buffer, NULL, &zero, 1 );\n        }\n\n#ifdef FIXED_POINT\n        pcm_silk = pcm_buf+total_buffer*st->channels;\n#else\n        for (i=0;i<frame_size*st->channels;i++)\n            pcm_silk[i] = FLOAT2INT16(pcm_buf[total_buffer*st->channels + i]);\n#endif\n     
   ret = silk_Encode( silk_enc, &st->silk_mode, pcm_silk, frame_size, &enc, &nBytes, 0 );\n        if( ret ) {\n            /*fprintf (stderr, \"SILK encode error: %d\\n\", ret);*/\n            /* Handle error */\n           RESTORE_STACK;\n           return OPUS_INTERNAL_ERROR;\n        }\n\n        /* Extract SILK internal bandwidth for signaling in first byte */\n        if( st->mode == MODE_SILK_ONLY ) {\n            if( st->silk_mode.internalSampleRate == 8000 ) {\n               curr_bandwidth = OPUS_BANDWIDTH_NARROWBAND;\n            } else if( st->silk_mode.internalSampleRate == 12000 ) {\n               curr_bandwidth = OPUS_BANDWIDTH_MEDIUMBAND;\n            } else if( st->silk_mode.internalSampleRate == 16000 ) {\n               curr_bandwidth = OPUS_BANDWIDTH_WIDEBAND;\n            }\n        } else {\n            silk_assert( st->silk_mode.internalSampleRate == 16000 );\n        }\n\n        st->silk_mode.opusCanSwitch = st->silk_mode.switchReady && !st->nonfinal_frame;\n\n        if (nBytes==0)\n        {\n           st->rangeFinal = 0;\n           data[-1] = gen_toc(st->mode, st->Fs/frame_size, curr_bandwidth, st->stream_channels);\n           RESTORE_STACK;\n           return 1;\n        }\n\n        /* FIXME: How do we allocate the redundancy for CBR? 
*/\n        if (st->silk_mode.opusCanSwitch)\n        {\n           redundancy_bytes = compute_redundancy_bytes(max_data_bytes, st->bitrate_bps, frame_rate, st->stream_channels);\n           redundancy = (redundancy_bytes != 0);\n           celt_to_silk = 0;\n           st->silk_bw_switch = 1;\n        }\n    }\n\n    /* CELT processing */\n    {\n        int endband=21;\n\n        switch(curr_bandwidth)\n        {\n            case OPUS_BANDWIDTH_NARROWBAND:\n                endband = 13;\n                break;\n            case OPUS_BANDWIDTH_MEDIUMBAND:\n            case OPUS_BANDWIDTH_WIDEBAND:\n                endband = 17;\n                break;\n            case OPUS_BANDWIDTH_SUPERWIDEBAND:\n                endband = 19;\n                break;\n            case OPUS_BANDWIDTH_FULLBAND:\n                endband = 21;\n                break;\n        }\n        celt_encoder_ctl(celt_enc, CELT_SET_END_BAND(endband));\n        celt_encoder_ctl(celt_enc, CELT_SET_CHANNELS(st->stream_channels));\n    }\n    celt_encoder_ctl(celt_enc, OPUS_SET_BITRATE(OPUS_BITRATE_MAX));\n    if (st->mode != MODE_SILK_ONLY)\n    {\n        opus_val32 celt_pred=2;\n        celt_encoder_ctl(celt_enc, OPUS_SET_VBR(0));\n        /* We may still decide to disable prediction later */\n        if (st->silk_mode.reducedDependency)\n           celt_pred = 0;\n        celt_encoder_ctl(celt_enc, CELT_SET_PREDICTION(celt_pred));\n\n        if (st->mode == MODE_HYBRID)\n        {\n            if( st->use_vbr ) {\n                celt_encoder_ctl(celt_enc, OPUS_SET_BITRATE(st->bitrate_bps-st->silk_mode.bitRate));\n                celt_encoder_ctl(celt_enc, OPUS_SET_VBR_CONSTRAINT(0));\n            }\n        } else {\n            if (st->use_vbr)\n            {\n                celt_encoder_ctl(celt_enc, OPUS_SET_VBR(1));\n                celt_encoder_ctl(celt_enc, OPUS_SET_VBR_CONSTRAINT(st->vbr_constraint));\n                celt_encoder_ctl(celt_enc, OPUS_SET_BITRATE(st->bitrate_bps));\n  
          }\n        }\n    }\n\n    ALLOC(tmp_prefill, st->channels*st->Fs/400, opus_val16);\n    if (st->mode != MODE_SILK_ONLY && st->mode != st->prev_mode && st->prev_mode > 0)\n    {\n       OPUS_COPY(tmp_prefill, &st->delay_buffer[(st->encoder_buffer-total_buffer-st->Fs/400)*st->channels], st->channels*st->Fs/400);\n    }\n\n    if (st->channels*(st->encoder_buffer-(frame_size+total_buffer)) > 0)\n    {\n       OPUS_MOVE(st->delay_buffer, &st->delay_buffer[st->channels*frame_size], st->channels*(st->encoder_buffer-frame_size-total_buffer));\n       OPUS_COPY(&st->delay_buffer[st->channels*(st->encoder_buffer-frame_size-total_buffer)],\n             &pcm_buf[0],\n             (frame_size+total_buffer)*st->channels);\n    } else {\n       OPUS_COPY(st->delay_buffer, &pcm_buf[(frame_size+total_buffer-st->encoder_buffer)*st->channels], st->encoder_buffer*st->channels);\n    }\n    /* gain_fade() and stereo_fade() need to be after the buffer copying\n       because we don't want any of this to affect the SILK part */\n    if( st->prev_HB_gain < Q15ONE || HB_gain < Q15ONE ) {\n       gain_fade(pcm_buf, pcm_buf,\n             st->prev_HB_gain, HB_gain, celt_mode->overlap, frame_size, st->channels, celt_mode->window, st->Fs);\n    }\n    st->prev_HB_gain = HB_gain;\n    if (st->mode != MODE_HYBRID || st->stream_channels==1)\n       st->silk_mode.stereoWidth_Q14 = IMIN((1<<14),2*IMAX(0,equiv_rate-24000));\n    if( !st->energy_masking && st->channels == 2 ) {\n        /* Apply stereo width reduction (at low bitrates) */\n        if( st->hybrid_stereo_width_Q14 < (1 << 14) || st->silk_mode.stereoWidth_Q14 < (1 << 14) ) {\n            opus_val16 g1, g2;\n            g1 = st->hybrid_stereo_width_Q14;\n            g2 = (opus_val16)(st->silk_mode.stereoWidth_Q14);\n#ifdef FIXED_POINT\n            g1 = g1==16384 ? Q15ONE : SHL16(g1,1);\n            g2 = g2==16384 ? 
Q15ONE : SHL16(g2,1);\n#else\n            g1 *= (1.f/16384);\n            g2 *= (1.f/16384);\n#endif\n            stereo_fade(pcm_buf, pcm_buf, g1, g2, celt_mode->overlap,\n                  frame_size, st->channels, celt_mode->window, st->Fs);\n            st->hybrid_stereo_width_Q14 = st->silk_mode.stereoWidth_Q14;\n        }\n    }\n\n    if ( st->mode != MODE_CELT_ONLY && ec_tell(&enc)+17+20*(st->mode == MODE_HYBRID) <= 8*(max_data_bytes-1))\n    {\n        /* For SILK mode, the redundancy is inferred from the length */\n        if (st->mode == MODE_HYBRID)\n           ec_enc_bit_logp(&enc, redundancy, 12);\n        if (redundancy)\n        {\n            int max_redundancy;\n            ec_enc_bit_logp(&enc, celt_to_silk, 1);\n            if (st->mode == MODE_HYBRID)\n            {\n               /* Reserve the 8 bits needed for the redundancy length,\n                  and at least a few bits for CELT if possible */\n               max_redundancy = (max_data_bytes-1)-((ec_tell(&enc)+8+3+7)>>3);\n            }\n            else\n               max_redundancy = (max_data_bytes-1)-((ec_tell(&enc)+7)>>3);\n            /* Target the same bit-rate for redundancy as for the rest,\n               up to a max of 257 bytes */\n            redundancy_bytes = IMIN(max_redundancy, redundancy_bytes);\n            redundancy_bytes = IMIN(257, IMAX(2, redundancy_bytes));\n            if (st->mode == MODE_HYBRID)\n                ec_enc_uint(&enc, redundancy_bytes-2, 256);\n        }\n    } else {\n        redundancy = 0;\n    }\n\n    if (!redundancy)\n    {\n       st->silk_bw_switch = 0;\n       redundancy_bytes = 0;\n    }\n    if (st->mode != MODE_CELT_ONLY)start_band=17;\n\n    if (st->mode == MODE_SILK_ONLY)\n    {\n        ret = (ec_tell(&enc)+7)>>3;\n        ec_enc_done(&enc);\n        nb_compr_bytes = ret;\n    } else {\n       nb_compr_bytes = (max_data_bytes-1)-redundancy_bytes;\n       ec_enc_shrink(&enc, nb_compr_bytes);\n    }\n\n#ifndef DISABLE_FLOAT_API\n    
if (redundancy || st->mode != MODE_SILK_ONLY)\n       celt_encoder_ctl(celt_enc, CELT_SET_ANALYSIS(&analysis_info));\n#endif\n    if (st->mode == MODE_HYBRID) {\n       SILKInfo info;\n       info.signalType = st->silk_mode.signalType;\n       info.offset = st->silk_mode.offset;\n       celt_encoder_ctl(celt_enc, CELT_SET_SILK_INFO(&info));\n    } else {\n       celt_encoder_ctl(celt_enc, CELT_SET_SILK_INFO((SILKInfo*)NULL));\n    }\n\n    /* 5 ms redundant frame for CELT->SILK */\n    if (redundancy && celt_to_silk)\n    {\n        int err;\n        celt_encoder_ctl(celt_enc, CELT_SET_START_BAND(0));\n        celt_encoder_ctl(celt_enc, OPUS_SET_VBR(0));\n        celt_encoder_ctl(celt_enc, OPUS_SET_BITRATE(OPUS_BITRATE_MAX));\n        err = celt_encode_with_ec(celt_enc, pcm_buf, st->Fs/200, data+nb_compr_bytes, redundancy_bytes, NULL);\n        if (err < 0)\n        {\n           RESTORE_STACK;\n           return OPUS_INTERNAL_ERROR;\n        }\n        celt_encoder_ctl(celt_enc, OPUS_GET_FINAL_RANGE(&redundant_rng));\n        celt_encoder_ctl(celt_enc, OPUS_RESET_STATE);\n    }\n\n    celt_encoder_ctl(celt_enc, CELT_SET_START_BAND(start_band));\n\n    if (st->mode != MODE_SILK_ONLY)\n    {\n        if (st->mode != st->prev_mode && st->prev_mode > 0)\n        {\n           unsigned char dummy[2];\n           celt_encoder_ctl(celt_enc, OPUS_RESET_STATE);\n\n           /* Prefilling */\n           celt_encode_with_ec(celt_enc, tmp_prefill, st->Fs/400, dummy, 2, NULL);\n           celt_encoder_ctl(celt_enc, CELT_SET_PREDICTION(0));\n        }\n        /* If false, we already busted the budget and we'll end up with a \"PLC frame\" */\n        if (ec_tell(&enc) <= 8*nb_compr_bytes)\n        {\n           /* Set the bitrate again if it was overridden in the redundancy code above*/\n           if (redundancy && celt_to_silk && st->mode==MODE_HYBRID && st->use_vbr)\n              celt_encoder_ctl(celt_enc, OPUS_SET_BITRATE(st->bitrate_bps-st->silk_mode.bitRate));\n         
  celt_encoder_ctl(celt_enc, OPUS_SET_VBR(st->use_vbr));\n           ret = celt_encode_with_ec(celt_enc, pcm_buf, frame_size, NULL, nb_compr_bytes, &enc);\n           if (ret < 0)\n           {\n              RESTORE_STACK;\n              return OPUS_INTERNAL_ERROR;\n           }\n           /* Put CELT->SILK redundancy data in the right place. */\n           if (redundancy && celt_to_silk && st->mode==MODE_HYBRID && st->use_vbr)\n           {\n              OPUS_MOVE(data+ret, data+nb_compr_bytes, redundancy_bytes);\n              nb_compr_bytes = nb_compr_bytes+redundancy_bytes;\n           }\n        }\n    }\n\n    /* 5 ms redundant frame for SILK->CELT */\n    if (redundancy && !celt_to_silk)\n    {\n        int err;\n        unsigned char dummy[2];\n        int N2, N4;\n        N2 = st->Fs/200;\n        N4 = st->Fs/400;\n\n        celt_encoder_ctl(celt_enc, OPUS_RESET_STATE);\n        celt_encoder_ctl(celt_enc, CELT_SET_START_BAND(0));\n        celt_encoder_ctl(celt_enc, CELT_SET_PREDICTION(0));\n        celt_encoder_ctl(celt_enc, OPUS_SET_VBR(0));\n        celt_encoder_ctl(celt_enc, OPUS_SET_BITRATE(OPUS_BITRATE_MAX));\n\n        if (st->mode == MODE_HYBRID)\n        {\n           /* Shrink packet to what the encoder actually used. 
*/\n           nb_compr_bytes = ret;\n           ec_enc_shrink(&enc, nb_compr_bytes);\n        }\n        /* NOTE: We could speed this up slightly (at the expense of code size) by just adding a function that prefills the buffer */\n        celt_encode_with_ec(celt_enc, pcm_buf+st->channels*(frame_size-N2-N4), N4, dummy, 2, NULL);\n\n        err = celt_encode_with_ec(celt_enc, pcm_buf+st->channels*(frame_size-N2), N2, data+nb_compr_bytes, redundancy_bytes, NULL);\n        if (err < 0)\n        {\n           RESTORE_STACK;\n           return OPUS_INTERNAL_ERROR;\n        }\n        celt_encoder_ctl(celt_enc, OPUS_GET_FINAL_RANGE(&redundant_rng));\n    }\n\n\n\n    /* Signalling the mode in the first byte */\n    data--;\n    data[0] = gen_toc(st->mode, st->Fs/frame_size, curr_bandwidth, st->stream_channels);\n\n    st->rangeFinal = enc.rng ^ redundant_rng;\n\n    if (to_celt)\n        st->prev_mode = MODE_CELT_ONLY;\n    else\n        st->prev_mode = st->mode;\n    st->prev_channels = st->stream_channels;\n    st->prev_framesize = frame_size;\n\n    st->first = 0;\n\n    /* DTX decision */\n#ifndef DISABLE_FLOAT_API\n    if (st->use_dtx && (analysis_info.valid || is_silence))\n    {\n       if (decide_dtx_mode(analysis_info.activity_probability, &st->nb_no_activity_frames,\n             st->peak_signal_energy, pcm, frame_size, st->channels, is_silence, st->arch))\n       {\n          st->rangeFinal = 0;\n          data[0] = gen_toc(st->mode, st->Fs/frame_size, curr_bandwidth, st->stream_channels);\n          RESTORE_STACK;\n          return 1;\n       }\n    }\n#endif\n\n    /* In the unlikely case that the SILK encoder busted its target, tell\n       the decoder to call the PLC */\n    if (ec_tell(&enc) > (max_data_bytes-1)*8)\n    {\n       if (max_data_bytes < 2)\n       {\n          RESTORE_STACK;\n          return OPUS_BUFFER_TOO_SMALL;\n       }\n       data[1] = 0;\n       ret = 1;\n       st->rangeFinal = 0;\n    } else if 
(st->mode==MODE_SILK_ONLY&&!redundancy)\n    {\n       /*When in LPC only mode it's perfectly\n         reasonable to strip off trailing zero bytes as\n         the required range decoder behavior is to\n         fill these in. This can't be done when the MDCT\n         modes are used because the decoder needs to know\n         the actual length for allocation purposes.*/\n       while(ret>2&&data[ret]==0)ret--;\n    }\n    /* Count ToC and redundancy */\n    ret += 1+redundancy_bytes;\n    if (!st->use_vbr)\n    {\n       if (opus_packet_pad(data, ret, max_data_bytes) != OPUS_OK)\n       {\n          RESTORE_STACK;\n          return OPUS_INTERNAL_ERROR;\n       }\n       ret = max_data_bytes;\n    }\n    RESTORE_STACK;\n    return ret;\n}\n\n#ifdef FIXED_POINT\n\n#ifndef DISABLE_FLOAT_API\nopus_int32 opus_encode_float(OpusEncoder *st, const float *pcm, int analysis_frame_size,\n      unsigned char *data, opus_int32 max_data_bytes)\n{\n   int i, ret;\n   int frame_size;\n   VARDECL(opus_int16, in);\n   ALLOC_STACK;\n\n   frame_size = frame_size_select(analysis_frame_size, st->variable_duration, st->Fs);\n   if (frame_size <= 0)\n   {\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   }\n   ALLOC(in, frame_size*st->channels, opus_int16);\n\n   for (i=0;i<frame_size*st->channels;i++)\n      in[i] = FLOAT2INT16(pcm[i]);\n   ret = opus_encode_native(st, in, frame_size, data, max_data_bytes, 16,\n                            pcm, analysis_frame_size, 0, -2, st->channels, downmix_float, 1);\n   RESTORE_STACK;\n   return ret;\n}\n#endif\n\nopus_int32 opus_encode(OpusEncoder *st, const opus_int16 *pcm, int analysis_frame_size,\n                unsigned char *data, opus_int32 out_data_bytes)\n{\n   int frame_size;\n   frame_size = frame_size_select(analysis_frame_size, st->variable_duration, st->Fs);\n   return opus_encode_native(st, pcm, frame_size, data, out_data_bytes, 16,\n                             pcm, analysis_frame_size, 0, -2, st->channels, downmix_int, 
0);\n}\n\n#else\nopus_int32 opus_encode(OpusEncoder *st, const opus_int16 *pcm, int analysis_frame_size,\n      unsigned char *data, opus_int32 max_data_bytes)\n{\n   int i, ret;\n   int frame_size;\n   VARDECL(float, in);\n   ALLOC_STACK;\n\n   frame_size = frame_size_select(analysis_frame_size, st->variable_duration, st->Fs);\n   if (frame_size <= 0)\n   {\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   }\n   ALLOC(in, frame_size*st->channels, float);\n\n   for (i=0;i<frame_size*st->channels;i++)\n      in[i] = (1.0f/32768)*pcm[i];\n   ret = opus_encode_native(st, in, frame_size, data, max_data_bytes, 16,\n                            pcm, analysis_frame_size, 0, -2, st->channels, downmix_int, 0);\n   RESTORE_STACK;\n   return ret;\n}\nopus_int32 opus_encode_float(OpusEncoder *st, const float *pcm, int analysis_frame_size,\n                      unsigned char *data, opus_int32 out_data_bytes)\n{\n   int frame_size;\n   frame_size = frame_size_select(analysis_frame_size, st->variable_duration, st->Fs);\n   return opus_encode_native(st, pcm, frame_size, data, out_data_bytes, 24,\n                             pcm, analysis_frame_size, 0, -2, st->channels, downmix_float, 1);\n}\n#endif\n\n\nint opus_encoder_ctl(OpusEncoder *st, int request, ...)\n{\n    int ret;\n    CELTEncoder *celt_enc;\n    va_list ap;\n\n    ret = OPUS_OK;\n    va_start(ap, request);\n\n    celt_enc = (CELTEncoder*)((char*)st+st->celt_enc_offset);\n\n    switch (request)\n    {\n        case OPUS_SET_APPLICATION_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (   (value != OPUS_APPLICATION_VOIP && value != OPUS_APPLICATION_AUDIO\n                 && value != OPUS_APPLICATION_RESTRICTED_LOWDELAY)\n               || (!st->first && st->application != value))\n            {\n               ret = OPUS_BAD_ARG;\n               break;\n            }\n            st->application = value;\n#ifndef DISABLE_FLOAT_API\n            st->analysis.application = 
value;\n#endif\n        }\n        break;\n        case OPUS_GET_APPLICATION_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->application;\n        }\n        break;\n        case OPUS_SET_BITRATE_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (value != OPUS_AUTO && value != OPUS_BITRATE_MAX)\n            {\n                if (value <= 0)\n                    goto bad_arg;\n                else if (value <= 500)\n                    value = 500;\n                else if (value > (opus_int32)300000*st->channels)\n                    value = (opus_int32)300000*st->channels;\n            }\n            st->user_bitrate_bps = value;\n        }\n        break;\n        case OPUS_GET_BITRATE_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = user_bitrate_to_bitrate(st, st->prev_framesize, 1276);\n        }\n        break;\n        case OPUS_SET_FORCE_CHANNELS_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if((value<1 || value>st->channels) && value != OPUS_AUTO)\n            {\n               goto bad_arg;\n            }\n            st->force_channels = value;\n        }\n        break;\n        case OPUS_GET_FORCE_CHANNELS_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->force_channels;\n        }\n        break;\n        case OPUS_SET_MAX_BANDWIDTH_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (value < OPUS_BANDWIDTH_NARROWBAND || value > OPUS_BANDWIDTH_FULLBAND)\n            {\n               goto bad_arg;\n            }\n        
    st->max_bandwidth = value;\n            if (st->max_bandwidth == OPUS_BANDWIDTH_NARROWBAND) {\n                st->silk_mode.maxInternalSampleRate = 8000;\n            } else if (st->max_bandwidth == OPUS_BANDWIDTH_MEDIUMBAND) {\n                st->silk_mode.maxInternalSampleRate = 12000;\n            } else {\n                st->silk_mode.maxInternalSampleRate = 16000;\n            }\n        }\n        break;\n        case OPUS_GET_MAX_BANDWIDTH_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->max_bandwidth;\n        }\n        break;\n        case OPUS_SET_BANDWIDTH_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if ((value < OPUS_BANDWIDTH_NARROWBAND || value > OPUS_BANDWIDTH_FULLBAND) && value != OPUS_AUTO)\n            {\n               goto bad_arg;\n            }\n            st->user_bandwidth = value;\n            if (st->user_bandwidth == OPUS_BANDWIDTH_NARROWBAND) {\n                st->silk_mode.maxInternalSampleRate = 8000;\n            } else if (st->user_bandwidth == OPUS_BANDWIDTH_MEDIUMBAND) {\n                st->silk_mode.maxInternalSampleRate = 12000;\n            } else {\n                st->silk_mode.maxInternalSampleRate = 16000;\n            }\n        }\n        break;\n        case OPUS_GET_BANDWIDTH_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->bandwidth;\n        }\n        break;\n        case OPUS_SET_DTX_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value<0 || value>1)\n            {\n               goto bad_arg;\n            }\n            st->use_dtx = value;\n        }\n        break;\n        case OPUS_GET_DTX_REQUEST:\n        {\n            opus_int32 *value 
= va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->use_dtx;\n        }\n        break;\n        case OPUS_SET_COMPLEXITY_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value<0 || value>10)\n            {\n               goto bad_arg;\n            }\n            st->silk_mode.complexity = value;\n            celt_encoder_ctl(celt_enc, OPUS_SET_COMPLEXITY(value));\n        }\n        break;\n        case OPUS_GET_COMPLEXITY_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->silk_mode.complexity;\n        }\n        break;\n        case OPUS_SET_INBAND_FEC_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value<0 || value>1)\n            {\n               goto bad_arg;\n            }\n            st->silk_mode.useInBandFEC = value;\n        }\n        break;\n        case OPUS_GET_INBAND_FEC_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->silk_mode.useInBandFEC;\n        }\n        break;\n        case OPUS_SET_PACKET_LOSS_PERC_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (value < 0 || value > 100)\n            {\n               goto bad_arg;\n            }\n            st->silk_mode.packetLossPercentage = value;\n            celt_encoder_ctl(celt_enc, OPUS_SET_PACKET_LOSS_PERC(value));\n        }\n        break;\n        case OPUS_GET_PACKET_LOSS_PERC_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->silk_mode.packetLossPercentage;\n        
}\n        break;\n        case OPUS_SET_VBR_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value<0 || value>1)\n            {\n               goto bad_arg;\n            }\n            st->use_vbr = value;\n            st->silk_mode.useCBR = 1-value;\n        }\n        break;\n        case OPUS_GET_VBR_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->use_vbr;\n        }\n        break;\n        case OPUS_SET_VOICE_RATIO_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (value<-1 || value>100)\n            {\n               goto bad_arg;\n            }\n            st->voice_ratio = value;\n        }\n        break;\n        case OPUS_GET_VOICE_RATIO_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->voice_ratio;\n        }\n        break;\n        case OPUS_SET_VBR_CONSTRAINT_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value<0 || value>1)\n            {\n               goto bad_arg;\n            }\n            st->vbr_constraint = value;\n        }\n        break;\n        case OPUS_GET_VBR_CONSTRAINT_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->vbr_constraint;\n        }\n        break;\n        case OPUS_SET_SIGNAL_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value!=OPUS_AUTO && value!=OPUS_SIGNAL_VOICE && value!=OPUS_SIGNAL_MUSIC)\n            {\n               goto bad_arg;\n            }\n            st->signal_type = value;\n        }\n        break;\n        case 
OPUS_GET_SIGNAL_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->signal_type;\n        }\n        break;\n        case OPUS_GET_LOOKAHEAD_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->Fs/400;\n            if (st->application != OPUS_APPLICATION_RESTRICTED_LOWDELAY)\n                *value += st->delay_compensation;\n        }\n        break;\n        case OPUS_GET_SAMPLE_RATE_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->Fs;\n        }\n        break;\n        case OPUS_GET_FINAL_RANGE_REQUEST:\n        {\n            opus_uint32 *value = va_arg(ap, opus_uint32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->rangeFinal;\n        }\n        break;\n        case OPUS_SET_LSB_DEPTH_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (value<8 || value>24)\n            {\n               goto bad_arg;\n            }\n            st->lsb_depth=value;\n        }\n        break;\n        case OPUS_GET_LSB_DEPTH_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->lsb_depth;\n        }\n        break;\n        case OPUS_SET_EXPERT_FRAME_DURATION_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if (value != OPUS_FRAMESIZE_ARG    && value != OPUS_FRAMESIZE_2_5_MS &&\n                value != OPUS_FRAMESIZE_5_MS   && value != OPUS_FRAMESIZE_10_MS  &&\n                value != 
OPUS_FRAMESIZE_20_MS  && value != OPUS_FRAMESIZE_40_MS  &&\n                value != OPUS_FRAMESIZE_60_MS  && value != OPUS_FRAMESIZE_80_MS  &&\n                value != OPUS_FRAMESIZE_100_MS && value != OPUS_FRAMESIZE_120_MS)\n            {\n               goto bad_arg;\n            }\n            st->variable_duration = value;\n            celt_encoder_ctl(celt_enc, OPUS_SET_EXPERT_FRAME_DURATION(value));\n        }\n        break;\n        case OPUS_GET_EXPERT_FRAME_DURATION_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            *value = st->variable_duration;\n        }\n        break;\n        case OPUS_SET_PREDICTION_DISABLED_REQUEST:\n        {\n           opus_int32 value = va_arg(ap, opus_int32);\n           if (value > 1 || value < 0)\n              goto bad_arg;\n           st->silk_mode.reducedDependency = value;\n        }\n        break;\n        case OPUS_GET_PREDICTION_DISABLED_REQUEST:\n        {\n           opus_int32 *value = va_arg(ap, opus_int32*);\n           if (!value)\n              goto bad_arg;\n           *value = st->silk_mode.reducedDependency;\n        }\n        break;\n        case OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if(value<0 || value>1)\n            {\n               goto bad_arg;\n            }\n            celt_encoder_ctl(celt_enc, OPUS_SET_PHASE_INVERSION_DISABLED(value));\n        }\n        break;\n        case OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST:\n        {\n            opus_int32 *value = va_arg(ap, opus_int32*);\n            if (!value)\n            {\n               goto bad_arg;\n            }\n            celt_encoder_ctl(celt_enc, OPUS_GET_PHASE_INVERSION_DISABLED(value));\n        }\n        break;\n        case OPUS_RESET_STATE:\n        {\n           void *silk_enc;\n           silk_EncControlStruct 
dummy;\n           char *start;\n           silk_enc = (char*)st+st->silk_enc_offset;\n#ifndef DISABLE_FLOAT_API\n           tonality_analysis_reset(&st->analysis);\n#endif\n\n           start = (char*)&st->OPUS_ENCODER_RESET_START;\n           OPUS_CLEAR(start, sizeof(OpusEncoder) - (start - (char*)st));\n\n           celt_encoder_ctl(celt_enc, OPUS_RESET_STATE);\n           silk_InitEncoder( silk_enc, st->arch, &dummy );\n           st->stream_channels = st->channels;\n           st->hybrid_stereo_width_Q14 = 1 << 14;\n           st->prev_HB_gain = Q15ONE;\n           st->first = 1;\n           st->mode = MODE_HYBRID;\n           st->bandwidth = OPUS_BANDWIDTH_FULLBAND;\n           st->variable_HP_smth2_Q15 = silk_LSHIFT( silk_lin2log( VARIABLE_HP_MIN_CUTOFF_HZ ), 8 );\n        }\n        break;\n        case OPUS_SET_FORCE_MODE_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            if ((value < MODE_SILK_ONLY || value > MODE_CELT_ONLY) && value != OPUS_AUTO)\n            {\n               goto bad_arg;\n            }\n            st->user_forced_mode = value;\n        }\n        break;\n        case OPUS_SET_LFE_REQUEST:\n        {\n            opus_int32 value = va_arg(ap, opus_int32);\n            st->lfe = value;\n            ret = celt_encoder_ctl(celt_enc, OPUS_SET_LFE(value));\n        }\n        break;\n        case OPUS_SET_ENERGY_MASK_REQUEST:\n        {\n            opus_val16 *value = va_arg(ap, opus_val16*);\n            st->energy_masking = value;\n            ret = celt_encoder_ctl(celt_enc, OPUS_SET_ENERGY_MASK(value));\n        }\n        break;\n\n        case CELT_GET_MODE_REQUEST:\n        {\n           const CELTMode ** value = va_arg(ap, const CELTMode**);\n           if (!value)\n           {\n              goto bad_arg;\n           }\n           ret = celt_encoder_ctl(celt_enc, CELT_GET_MODE(value));\n        }\n        break;\n        default:\n            /* fprintf(stderr, \"unknown opus_encoder_ctl() 
request: %d\", request);*/\n            ret = OPUS_UNIMPLEMENTED;\n            break;\n    }\n    va_end(ap);\n    return ret;\nbad_arg:\n    va_end(ap);\n    return OPUS_BAD_ARG;\n}\n\nvoid opus_encoder_destroy(OpusEncoder *st)\n{\n    opus_free(st);\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_multistream.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus_multistream.h\"\n#include \"opus.h\"\n#include \"opus_private.h\"\n#include \"stack_alloc.h\"\n#include <stdarg.h>\n#include \"float_cast.h\"\n#include \"os_support.h\"\n\n\nint validate_layout(const ChannelLayout *layout)\n{\n   int i, max_channel;\n\n   max_channel = layout->nb_streams+layout->nb_coupled_streams;\n   if (max_channel>255)\n      return 0;\n   for (i=0;i<layout->nb_channels;i++)\n   {\n      if (layout->mapping[i] >= max_channel && layout->mapping[i] != 255)\n         return 0;\n   }\n   
return 1;\n}\n\n\nint get_left_channel(const ChannelLayout *layout, int stream_id, int prev)\n{\n   int i;\n   i = (prev<0) ? 0 : prev+1;\n   for (;i<layout->nb_channels;i++)\n   {\n      if (layout->mapping[i]==stream_id*2)\n         return i;\n   }\n   return -1;\n}\n\nint get_right_channel(const ChannelLayout *layout, int stream_id, int prev)\n{\n   int i;\n   i = (prev<0) ? 0 : prev+1;\n   for (;i<layout->nb_channels;i++)\n   {\n      if (layout->mapping[i]==stream_id*2+1)\n         return i;\n   }\n   return -1;\n}\n\nint get_mono_channel(const ChannelLayout *layout, int stream_id, int prev)\n{\n   int i;\n   i = (prev<0) ? 0 : prev+1;\n   for (;i<layout->nb_channels;i++)\n   {\n      if (layout->mapping[i]==stream_id+layout->nb_coupled_streams)\n         return i;\n   }\n   return -1;\n}\n\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_multistream_decoder.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus_multistream.h\"\n#include \"opus.h\"\n#include \"opus_private.h\"\n#include \"stack_alloc.h\"\n#include <stdarg.h>\n#include \"float_cast.h\"\n#include \"os_support.h\"\n\nstruct OpusMSDecoder {\n   ChannelLayout layout;\n   /* Decoder states go here */\n};\n\n\n\n\n/* DECODER */\n\nopus_int32 opus_multistream_decoder_get_size(int nb_streams, int nb_coupled_streams)\n{\n   int coupled_size;\n   int mono_size;\n\n   if(nb_streams<1||nb_coupled_streams>nb_streams||nb_coupled_streams<0)return 0;\n   
coupled_size = opus_decoder_get_size(2);\n   mono_size = opus_decoder_get_size(1);\n   return align(sizeof(OpusMSDecoder))\n         + nb_coupled_streams * align(coupled_size)\n         + (nb_streams-nb_coupled_streams) * align(mono_size);\n}\n\nint opus_multistream_decoder_init(\n      OpusMSDecoder *st,\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping\n)\n{\n   int coupled_size;\n   int mono_size;\n   int i, ret;\n   char *ptr;\n\n   if ((channels>255) || (channels<1) || (coupled_streams>streams) ||\n       (streams<1) || (coupled_streams<0) || (streams>255-coupled_streams))\n      return OPUS_BAD_ARG;\n\n   st->layout.nb_channels = channels;\n   st->layout.nb_streams = streams;\n   st->layout.nb_coupled_streams = coupled_streams;\n\n   for (i=0;i<st->layout.nb_channels;i++)\n      st->layout.mapping[i] = mapping[i];\n   if (!validate_layout(&st->layout))\n      return OPUS_BAD_ARG;\n\n   ptr = (char*)st + align(sizeof(OpusMSDecoder));\n   coupled_size = opus_decoder_get_size(2);\n   mono_size = opus_decoder_get_size(1);\n\n   for (i=0;i<st->layout.nb_coupled_streams;i++)\n   {\n      ret=opus_decoder_init((OpusDecoder*)ptr, Fs, 2);\n      if(ret!=OPUS_OK)return ret;\n      ptr += align(coupled_size);\n   }\n   for (;i<st->layout.nb_streams;i++)\n   {\n      ret=opus_decoder_init((OpusDecoder*)ptr, Fs, 1);\n      if(ret!=OPUS_OK)return ret;\n      ptr += align(mono_size);\n   }\n   return OPUS_OK;\n}\n\n\nOpusMSDecoder *opus_multistream_decoder_create(\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int *error\n)\n{\n   int ret;\n   OpusMSDecoder *st;\n   if ((channels>255) || (channels<1) || (coupled_streams>streams) ||\n       (streams<1) || (coupled_streams<0) || (streams>255-coupled_streams))\n   {\n      if (error)\n         *error = OPUS_BAD_ARG;\n      return NULL;\n   }\n   st = (OpusMSDecoder 
*)opus_alloc(opus_multistream_decoder_get_size(streams, coupled_streams));\n   if (st==NULL)\n   {\n      if (error)\n         *error = OPUS_ALLOC_FAIL;\n      return NULL;\n   }\n   ret = opus_multistream_decoder_init(st, Fs, channels, streams, coupled_streams, mapping);\n   if (error)\n      *error = ret;\n   if (ret != OPUS_OK)\n   {\n      opus_free(st);\n      st = NULL;\n   }\n   return st;\n}\n\ntypedef void (*opus_copy_channel_out_func)(\n  void *dst,\n  int dst_stride,\n  int dst_channel,\n  const opus_val16 *src,\n  int src_stride,\n  int frame_size\n);\n\nstatic int opus_multistream_packet_validate(const unsigned char *data,\n      opus_int32 len, int nb_streams, opus_int32 Fs)\n{\n   int s;\n   int count;\n   unsigned char toc;\n   opus_int16 size[48];\n   int samples=0;\n   opus_int32 packet_offset;\n\n   for (s=0;s<nb_streams;s++)\n   {\n      int tmp_samples;\n      if (len<=0)\n         return OPUS_INVALID_PACKET;\n      count = opus_packet_parse_impl(data, len, s!=nb_streams-1, &toc, NULL,\n                                     size, NULL, &packet_offset);\n      if (count<0)\n         return count;\n      tmp_samples = opus_packet_get_nb_samples(data, packet_offset, Fs);\n      if (s!=0 && samples != tmp_samples)\n         return OPUS_INVALID_PACKET;\n      samples = tmp_samples;\n      data += packet_offset;\n      len -= packet_offset;\n   }\n   return samples;\n}\n\nstatic int opus_multistream_decode_native(\n      OpusMSDecoder *st,\n      const unsigned char *data,\n      opus_int32 len,\n      void *pcm,\n      opus_copy_channel_out_func copy_channel_out,\n      int frame_size,\n      int decode_fec,\n      int soft_clip\n)\n{\n   opus_int32 Fs;\n   int coupled_size;\n   int mono_size;\n   int s, c;\n   char *ptr;\n   int do_plc=0;\n   VARDECL(opus_val16, buf);\n   ALLOC_STACK;\n\n   /* Limit frame_size to avoid excessive stack allocations. 
*/\n   opus_multistream_decoder_ctl(st, OPUS_GET_SAMPLE_RATE(&Fs));\n   frame_size = IMIN(frame_size, Fs/25*3);\n   ALLOC(buf, 2*frame_size, opus_val16);\n   ptr = (char*)st + align(sizeof(OpusMSDecoder));\n   coupled_size = opus_decoder_get_size(2);\n   mono_size = opus_decoder_get_size(1);\n\n   if (len==0)\n      do_plc = 1;\n   if (len < 0)\n   {\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   }\n   if (!do_plc && len < 2*st->layout.nb_streams-1)\n   {\n      RESTORE_STACK;\n      return OPUS_INVALID_PACKET;\n   }\n   if (!do_plc)\n   {\n      int ret = opus_multistream_packet_validate(data, len, st->layout.nb_streams, Fs);\n      if (ret < 0)\n      {\n         RESTORE_STACK;\n         return ret;\n      } else if (ret > frame_size)\n      {\n         RESTORE_STACK;\n         return OPUS_BUFFER_TOO_SMALL;\n      }\n   }\n   for (s=0;s<st->layout.nb_streams;s++)\n   {\n      OpusDecoder *dec;\n      opus_int32 packet_offset;\n      int ret;\n\n      dec = (OpusDecoder*)ptr;\n      ptr += (s < st->layout.nb_coupled_streams) ? 
align(coupled_size) : align(mono_size);\n\n      if (!do_plc && len<=0)\n      {\n         RESTORE_STACK;\n         return OPUS_INTERNAL_ERROR;\n      }\n      packet_offset = 0;\n      ret = opus_decode_native(dec, data, len, buf, frame_size, decode_fec, s!=st->layout.nb_streams-1, &packet_offset, soft_clip);\n      data += packet_offset;\n      len -= packet_offset;\n      if (ret <= 0)\n      {\n         RESTORE_STACK;\n         return ret;\n      }\n      frame_size = ret;\n      if (s < st->layout.nb_coupled_streams)\n      {\n         int chan, prev;\n         prev = -1;\n         /* Copy \"left\" audio to the channel(s) where it belongs */\n         while ( (chan = get_left_channel(&st->layout, s, prev)) != -1)\n         {\n            (*copy_channel_out)(pcm, st->layout.nb_channels, chan,\n               buf, 2, frame_size);\n            prev = chan;\n         }\n         prev = -1;\n         /* Copy \"right\" audio to the channel(s) where it belongs */\n         while ( (chan = get_right_channel(&st->layout, s, prev)) != -1)\n         {\n            (*copy_channel_out)(pcm, st->layout.nb_channels, chan,\n               buf+1, 2, frame_size);\n            prev = chan;\n         }\n      } else {\n         int chan, prev;\n         prev = -1;\n         /* Copy audio to the channel(s) where it belongs */\n         while ( (chan = get_mono_channel(&st->layout, s, prev)) != -1)\n         {\n            (*copy_channel_out)(pcm, st->layout.nb_channels, chan,\n               buf, 1, frame_size);\n            prev = chan;\n         }\n      }\n   }\n   /* Handle muted channels */\n   for (c=0;c<st->layout.nb_channels;c++)\n   {\n      if (st->layout.mapping[c] == 255)\n      {\n         (*copy_channel_out)(pcm, st->layout.nb_channels, c,\n            NULL, 0, frame_size);\n      }\n   }\n   RESTORE_STACK;\n   return frame_size;\n}\n\n#if !defined(DISABLE_FLOAT_API)\nstatic void opus_copy_channel_out_float(\n  void *dst,\n  int dst_stride,\n  int dst_channel,\n  
const opus_val16 *src,\n  int src_stride,\n  int frame_size\n)\n{\n   float *float_dst;\n   opus_int32 i;\n   float_dst = (float*)dst;\n   if (src != NULL)\n   {\n      for (i=0;i<frame_size;i++)\n#if defined(FIXED_POINT)\n         float_dst[i*dst_stride+dst_channel] = (1/32768.f)*src[i*src_stride];\n#else\n         float_dst[i*dst_stride+dst_channel] = src[i*src_stride];\n#endif\n   }\n   else\n   {\n      for (i=0;i<frame_size;i++)\n         float_dst[i*dst_stride+dst_channel] = 0;\n   }\n}\n#endif\n\nstatic void opus_copy_channel_out_short(\n  void *dst,\n  int dst_stride,\n  int dst_channel,\n  const opus_val16 *src,\n  int src_stride,\n  int frame_size\n)\n{\n   opus_int16 *short_dst;\n   opus_int32 i;\n   short_dst = (opus_int16*)dst;\n   if (src != NULL)\n   {\n      for (i=0;i<frame_size;i++)\n#if defined(FIXED_POINT)\n         short_dst[i*dst_stride+dst_channel] = src[i*src_stride];\n#else\n         short_dst[i*dst_stride+dst_channel] = FLOAT2INT16(src[i*src_stride]);\n#endif\n   }\n   else\n   {\n      for (i=0;i<frame_size;i++)\n         short_dst[i*dst_stride+dst_channel] = 0;\n   }\n}\n\n\n\n#ifdef FIXED_POINT\nint opus_multistream_decode(\n      OpusMSDecoder *st,\n      const unsigned char *data,\n      opus_int32 len,\n      opus_int16 *pcm,\n      int frame_size,\n      int decode_fec\n)\n{\n   return opus_multistream_decode_native(st, data, len,\n       pcm, opus_copy_channel_out_short, frame_size, decode_fec, 0);\n}\n\n#ifndef DISABLE_FLOAT_API\nint opus_multistream_decode_float(OpusMSDecoder *st, const unsigned char *data,\n      opus_int32 len, float *pcm, int frame_size, int decode_fec)\n{\n   return opus_multistream_decode_native(st, data, len,\n       pcm, opus_copy_channel_out_float, frame_size, decode_fec, 0);\n}\n#endif\n\n#else\n\nint opus_multistream_decode(OpusMSDecoder *st, const unsigned char *data,\n      opus_int32 len, opus_int16 *pcm, int frame_size, int decode_fec)\n{\n   return opus_multistream_decode_native(st, data, len,\n    
   pcm, opus_copy_channel_out_short, frame_size, decode_fec, 1);\n}\n\nint opus_multistream_decode_float(\n      OpusMSDecoder *st,\n      const unsigned char *data,\n      opus_int32 len,\n      float *pcm,\n      int frame_size,\n      int decode_fec\n)\n{\n   return opus_multistream_decode_native(st, data, len,\n       pcm, opus_copy_channel_out_float, frame_size, decode_fec, 0);\n}\n#endif\n\nint opus_multistream_decoder_ctl(OpusMSDecoder *st, int request, ...)\n{\n   va_list ap;\n   int coupled_size, mono_size;\n   char *ptr;\n   int ret = OPUS_OK;\n\n   va_start(ap, request);\n\n   coupled_size = opus_decoder_get_size(2);\n   mono_size = opus_decoder_get_size(1);\n   ptr = (char*)st + align(sizeof(OpusMSDecoder));\n   switch (request)\n   {\n       case OPUS_GET_BANDWIDTH_REQUEST:\n       case OPUS_GET_SAMPLE_RATE_REQUEST:\n       case OPUS_GET_GAIN_REQUEST:\n       case OPUS_GET_LAST_PACKET_DURATION_REQUEST:\n       case OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST:\n       {\n          OpusDecoder *dec;\n          /* For int32* GET params, just query the first stream */\n          opus_int32 *value = va_arg(ap, opus_int32*);\n          dec = (OpusDecoder*)ptr;\n          ret = opus_decoder_ctl(dec, request, value);\n       }\n       break;\n       case OPUS_GET_FINAL_RANGE_REQUEST:\n       {\n          int s;\n          opus_uint32 *value = va_arg(ap, opus_uint32*);\n          opus_uint32 tmp;\n          if (!value)\n          {\n             goto bad_arg;\n          }\n          *value = 0;\n          for (s=0;s<st->layout.nb_streams;s++)\n          {\n             OpusDecoder *dec;\n             dec = (OpusDecoder*)ptr;\n             if (s < st->layout.nb_coupled_streams)\n                ptr += align(coupled_size);\n             else\n                ptr += align(mono_size);\n             ret = opus_decoder_ctl(dec, request, &tmp);\n             if (ret != OPUS_OK) break;\n             *value ^= tmp;\n          }\n       }\n       break;\n       case 
OPUS_RESET_STATE:\n       {\n          int s;\n          for (s=0;s<st->layout.nb_streams;s++)\n          {\n             OpusDecoder *dec;\n\n             dec = (OpusDecoder*)ptr;\n             if (s < st->layout.nb_coupled_streams)\n                ptr += align(coupled_size);\n             else\n                ptr += align(mono_size);\n             ret = opus_decoder_ctl(dec, OPUS_RESET_STATE);\n             if (ret != OPUS_OK)\n                break;\n          }\n       }\n       break;\n       case OPUS_MULTISTREAM_GET_DECODER_STATE_REQUEST:\n       {\n          int s;\n          opus_int32 stream_id;\n          OpusDecoder **value;\n          stream_id = va_arg(ap, opus_int32);\n          if (stream_id<0 || stream_id >= st->layout.nb_streams)\n             ret = OPUS_BAD_ARG;\n          value = va_arg(ap, OpusDecoder**);\n          if (!value)\n          {\n             goto bad_arg;\n          }\n          for (s=0;s<stream_id;s++)\n          {\n             if (s < st->layout.nb_coupled_streams)\n                ptr += align(coupled_size);\n             else\n                ptr += align(mono_size);\n          }\n          *value = (OpusDecoder*)ptr;\n       }\n       break;\n       case OPUS_SET_GAIN_REQUEST:\n       case OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST:\n       {\n          int s;\n          /* This works for int32 params */\n          opus_int32 value = va_arg(ap, opus_int32);\n          for (s=0;s<st->layout.nb_streams;s++)\n          {\n             OpusDecoder *dec;\n\n             dec = (OpusDecoder*)ptr;\n             if (s < st->layout.nb_coupled_streams)\n                ptr += align(coupled_size);\n             else\n                ptr += align(mono_size);\n             ret = opus_decoder_ctl(dec, request, value);\n             if (ret != OPUS_OK)\n                break;\n          }\n       }\n       break;\n       default:\n          ret = OPUS_UNIMPLEMENTED;\n       break;\n   }\n\n   va_end(ap);\n   return ret;\nbad_arg:\n   
va_end(ap);\n   return OPUS_BAD_ARG;\n}\n\n\nvoid opus_multistream_decoder_destroy(OpusMSDecoder *st)\n{\n    opus_free(st);\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_multistream_encoder.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus_multistream.h\"\n#include \"opus.h\"\n#include \"opus_private.h\"\n#include \"stack_alloc.h\"\n#include <stdarg.h>\n#include \"float_cast.h\"\n#include \"os_support.h\"\n#include \"mathops.h\"\n#include \"mdct.h\"\n#include \"modes.h\"\n#include \"bands.h\"\n#include \"quant_bands.h\"\n#include \"pitch.h\"\n\ntypedef struct {\n   int nb_streams;\n   int nb_coupled_streams;\n   unsigned char mapping[8];\n} VorbisLayout;\n\n/* Index is nb_channel-1*/\nstatic const VorbisLayout vorbis_mappings[8] = {\n      
{1, 0, {0}},                      /* 1: mono */\n      {1, 1, {0, 1}},                   /* 2: stereo */\n      {2, 1, {0, 2, 1}},                /* 3: 1-d surround */\n      {2, 2, {0, 1, 2, 3}},             /* 4: quadraphonic surround */\n      {3, 2, {0, 4, 1, 2, 3}},          /* 5: 5-channel surround */\n      {4, 2, {0, 4, 1, 2, 3, 5}},       /* 6: 5.1 surround */\n      {4, 3, {0, 4, 1, 2, 3, 5, 6}},    /* 7: 6.1 surround */\n      {5, 3, {0, 6, 1, 2, 3, 4, 5, 7}}, /* 8: 7.1 surround */\n};\n\ntypedef void (*opus_copy_channel_in_func)(\n  opus_val16 *dst,\n  int dst_stride,\n  const void *src,\n  int src_stride,\n  int src_channel,\n  int frame_size\n);\n\ntypedef enum {\n  MAPPING_TYPE_NONE,\n  MAPPING_TYPE_SURROUND\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n  ,  /* Do not include comma at end of enumerator list */\n  MAPPING_TYPE_AMBISONICS\n#endif\n} MappingType;\n\nstruct OpusMSEncoder {\n   ChannelLayout layout;\n   int arch;\n   int lfe_stream;\n   int application;\n   int variable_duration;\n   MappingType mapping_type;\n   opus_int32 bitrate_bps;\n   /* Encoder states go here */\n   /* then opus_val32 window_mem[channels*120]; */\n   /* then opus_val32 preemph_mem[channels]; */\n};\n\nstatic opus_val32 *ms_get_preemph_mem(OpusMSEncoder *st)\n{\n   int s;\n   char *ptr;\n   int coupled_size, mono_size;\n\n   coupled_size = opus_encoder_get_size(2);\n   mono_size = opus_encoder_get_size(1);\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   for (s=0;s<st->layout.nb_streams;s++)\n   {\n      if (s < st->layout.nb_coupled_streams)\n         ptr += align(coupled_size);\n      else\n         ptr += align(mono_size);\n   }\n   /* void* cast avoids clang -Wcast-align warning */\n   return (opus_val32*)(void*)(ptr+st->layout.nb_channels*120*sizeof(opus_val32));\n}\n\nstatic opus_val32 *ms_get_window_mem(OpusMSEncoder *st)\n{\n   int s;\n   char *ptr;\n   int coupled_size, mono_size;\n\n   coupled_size = opus_encoder_get_size(2);\n   mono_size = 
opus_encoder_get_size(1);\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   for (s=0;s<st->layout.nb_streams;s++)\n   {\n      if (s < st->layout.nb_coupled_streams)\n         ptr += align(coupled_size);\n      else\n         ptr += align(mono_size);\n   }\n   /* void* cast avoids clang -Wcast-align warning */\n   return (opus_val32*)(void*)ptr;\n}\n\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\nstatic int validate_ambisonics(int nb_channels, int *nb_streams, int *nb_coupled_streams)\n{\n   int order_plus_one;\n   int acn_channels;\n   int nondiegetic_channels;\n\n   order_plus_one = isqrt32(nb_channels);\n   acn_channels = order_plus_one * order_plus_one;\n   nondiegetic_channels = nb_channels - acn_channels;\n\n   if (order_plus_one < 1 || order_plus_one > 15 ||\n       (nondiegetic_channels != 0 && nondiegetic_channels != 2))\n      return 0;\n\n   if (nb_streams)\n      *nb_streams = acn_channels + (nondiegetic_channels != 0);\n   if (nb_coupled_streams)\n      *nb_coupled_streams = nondiegetic_channels != 0;\n   return 1;\n}\n#endif\n\nstatic int validate_encoder_layout(const ChannelLayout *layout)\n{\n   int s;\n   for (s=0;s<layout->nb_streams;s++)\n   {\n      if (s < layout->nb_coupled_streams)\n      {\n         if (get_left_channel(layout, s, -1)==-1)\n            return 0;\n         if (get_right_channel(layout, s, -1)==-1)\n            return 0;\n      } else {\n         if (get_mono_channel(layout, s, -1)==-1)\n            return 0;\n      }\n   }\n   return 1;\n}\n\nstatic void channel_pos(int channels, int pos[8])\n{\n   /* Position in the mix: 0 don't mix, 1: left, 2: center, 3:right */\n   if (channels==4)\n   {\n      pos[0]=1;\n      pos[1]=3;\n      pos[2]=1;\n      pos[3]=3;\n   } else if (channels==3||channels==5||channels==6)\n   {\n      pos[0]=1;\n      pos[1]=2;\n      pos[2]=3;\n      pos[3]=1;\n      pos[4]=3;\n      pos[5]=0;\n   } else if (channels==7)\n   {\n      pos[0]=1;\n      pos[1]=2;\n      pos[2]=3;\n      pos[3]=1;\n      
pos[4]=3;\n      pos[5]=2;\n      pos[6]=0;\n   } else if (channels==8)\n   {\n      pos[0]=1;\n      pos[1]=2;\n      pos[2]=3;\n      pos[3]=1;\n      pos[4]=3;\n      pos[5]=1;\n      pos[6]=3;\n      pos[7]=0;\n   }\n}\n\n#if 1\n/* Computes a rough approximation of log2(2^a + 2^b) */\nstatic opus_val16 logSum(opus_val16 a, opus_val16 b)\n{\n   opus_val16 max;\n   opus_val32 diff;\n   opus_val16 frac;\n   static const opus_val16 diff_table[17] = {\n         QCONST16(0.5000000f, DB_SHIFT), QCONST16(0.2924813f, DB_SHIFT), QCONST16(0.1609640f, DB_SHIFT), QCONST16(0.0849625f, DB_SHIFT),\n         QCONST16(0.0437314f, DB_SHIFT), QCONST16(0.0221971f, DB_SHIFT), QCONST16(0.0111839f, DB_SHIFT), QCONST16(0.0056136f, DB_SHIFT),\n         QCONST16(0.0028123f, DB_SHIFT)\n   };\n   int low;\n   if (a>b)\n   {\n      max = a;\n      diff = SUB32(EXTEND32(a),EXTEND32(b));\n   } else {\n      max = b;\n      diff = SUB32(EXTEND32(b),EXTEND32(a));\n   }\n   if (!(diff < QCONST16(8.f, DB_SHIFT)))  /* inverted to catch NaNs */\n      return max;\n#ifdef FIXED_POINT\n   low = SHR32(diff, DB_SHIFT-1);\n   frac = SHL16(diff - SHL16(low, DB_SHIFT-1), 16-DB_SHIFT);\n#else\n   low = (int)floor(2*diff);\n   frac = 2*diff - low;\n#endif\n   return max + diff_table[low] + MULT16_16_Q15(frac, SUB16(diff_table[low+1], diff_table[low]));\n}\n#else\nopus_val16 logSum(opus_val16 a, opus_val16 b)\n{\n   return log2(pow(4, a)+ pow(4, b))/2;\n}\n#endif\n\nvoid surround_analysis(const CELTMode *celt_mode, const void *pcm, opus_val16 *bandLogE, opus_val32 *mem, opus_val32 *preemph_mem,\n      int len, int overlap, int channels, int rate, opus_copy_channel_in_func copy_channel_in, int arch\n)\n{\n   int c;\n   int i;\n   int LM;\n   int pos[8] = {0};\n   int upsample;\n   int frame_size;\n   int freq_size;\n   opus_val16 channel_offset;\n   opus_val32 bandE[21];\n   opus_val16 maskLogE[3][21];\n   VARDECL(opus_val32, in);\n   VARDECL(opus_val16, x);\n   VARDECL(opus_val32, freq);\n   SAVE_STACK;\n\n  
 upsample = resampling_factor(rate);\n   frame_size = len*upsample;\n   freq_size = IMIN(960, frame_size);\n\n   /* LM = log2(frame_size / 120) */\n   for (LM=0;LM<celt_mode->maxLM;LM++)\n      if (celt_mode->shortMdctSize<<LM==frame_size)\n         break;\n\n   ALLOC(in, frame_size+overlap, opus_val32);\n   ALLOC(x, len, opus_val16);\n   ALLOC(freq, freq_size, opus_val32);\n\n   channel_pos(channels, pos);\n\n   for (c=0;c<3;c++)\n      for (i=0;i<21;i++)\n         maskLogE[c][i] = -QCONST16(28.f, DB_SHIFT);\n\n   for (c=0;c<channels;c++)\n   {\n      int frame;\n      int nb_frames = frame_size/freq_size;\n      celt_assert(nb_frames*freq_size == frame_size);\n      OPUS_COPY(in, mem+c*overlap, overlap);\n      (*copy_channel_in)(x, 1, pcm, channels, c, len);\n      celt_preemphasis(x, in+overlap, frame_size, 1, upsample, celt_mode->preemph, preemph_mem+c, 0);\n#ifndef FIXED_POINT\n      {\n         opus_val32 sum;\n         sum = celt_inner_prod(in, in, frame_size+overlap, 0);\n         /* This should filter out both NaNs and ridiculous signals that could\n            cause NaNs further down. */\n         if (!(sum < 1e18f) || celt_isnan(sum))\n         {\n            OPUS_CLEAR(in, frame_size+overlap);\n            preemph_mem[c] = 0;\n         }\n      }\n#endif\n      OPUS_CLEAR(bandE, 21);\n      for (frame=0;frame<nb_frames;frame++)\n      {\n         opus_val32 tmpE[21];\n         clt_mdct_forward(&celt_mode->mdct, in+960*frame, freq, celt_mode->window,\n               overlap, celt_mode->maxLM-LM, 1, arch);\n         if (upsample != 1)\n         {\n            int bound = freq_size/upsample;\n            for (i=0;i<bound;i++)\n               freq[i] *= upsample;\n            for (;i<freq_size;i++)\n               freq[i] = 0;\n         }\n\n         compute_band_energies(celt_mode, freq, tmpE, 21, 1, LM, arch);\n         /* If we have multiple frames, take the max energy. 
*/\n         for (i=0;i<21;i++)\n            bandE[i] = MAX32(bandE[i], tmpE[i]);\n      }\n      amp2Log2(celt_mode, 21, 21, bandE, bandLogE+21*c, 1);\n      /* Apply spreading function with -6 dB/band going up and -12 dB/band going down. */\n      for (i=1;i<21;i++)\n         bandLogE[21*c+i] = MAX16(bandLogE[21*c+i], bandLogE[21*c+i-1]-QCONST16(1.f, DB_SHIFT));\n      for (i=19;i>=0;i--)\n         bandLogE[21*c+i] = MAX16(bandLogE[21*c+i], bandLogE[21*c+i+1]-QCONST16(2.f, DB_SHIFT));\n      if (pos[c]==1)\n      {\n         for (i=0;i<21;i++)\n            maskLogE[0][i] = logSum(maskLogE[0][i], bandLogE[21*c+i]);\n      } else if (pos[c]==3)\n      {\n         for (i=0;i<21;i++)\n            maskLogE[2][i] = logSum(maskLogE[2][i], bandLogE[21*c+i]);\n      } else if (pos[c]==2)\n      {\n         for (i=0;i<21;i++)\n         {\n            maskLogE[0][i] = logSum(maskLogE[0][i], bandLogE[21*c+i]-QCONST16(.5f, DB_SHIFT));\n            maskLogE[2][i] = logSum(maskLogE[2][i], bandLogE[21*c+i]-QCONST16(.5f, DB_SHIFT));\n         }\n      }\n#if 0\n      for (i=0;i<21;i++)\n         printf(\"%f \", bandLogE[21*c+i]);\n      float sum=0;\n      for (i=0;i<21;i++)\n         sum += bandLogE[21*c+i];\n      printf(\"%f \", sum/21);\n#endif\n      OPUS_COPY(mem+c*overlap, in+frame_size, overlap);\n   }\n   for (i=0;i<21;i++)\n      maskLogE[1][i] = MIN32(maskLogE[0][i],maskLogE[2][i]);\n   channel_offset = HALF16(celt_log2(QCONST32(2.f,14)/(channels-1)));\n   for (c=0;c<3;c++)\n      for (i=0;i<21;i++)\n         maskLogE[c][i] += channel_offset;\n#if 0\n   for (c=0;c<3;c++)\n   {\n      for (i=0;i<21;i++)\n         printf(\"%f \", maskLogE[c][i]);\n   }\n#endif\n   for (c=0;c<channels;c++)\n   {\n      opus_val16 *mask;\n      if (pos[c]!=0)\n      {\n         mask = &maskLogE[pos[c]-1][0];\n         for (i=0;i<21;i++)\n            bandLogE[21*c+i] = bandLogE[21*c+i] - mask[i];\n      } else {\n         for (i=0;i<21;i++)\n            bandLogE[21*c+i] = 0;\n      }\n#if 
0\n      for (i=0;i<21;i++)\n         printf(\"%f \", bandLogE[21*c+i]);\n      printf(\"\\n\");\n#endif\n#if 0\n      float sum=0;\n      for (i=0;i<21;i++)\n         sum += bandLogE[21*c+i];\n      printf(\"%f \", sum/(float)QCONST32(21.f, DB_SHIFT));\n      printf(\"\\n\");\n#endif\n   }\n   RESTORE_STACK;\n}\n\nopus_int32 opus_multistream_encoder_get_size(int nb_streams, int nb_coupled_streams)\n{\n   int coupled_size;\n   int mono_size;\n\n   if(nb_streams<1||nb_coupled_streams>nb_streams||nb_coupled_streams<0)return 0;\n   coupled_size = opus_encoder_get_size(2);\n   mono_size = opus_encoder_get_size(1);\n   return align(sizeof(OpusMSEncoder))\n        + nb_coupled_streams * align(coupled_size)\n        + (nb_streams-nb_coupled_streams) * align(mono_size);\n}\n\nopus_int32 opus_multistream_surround_encoder_get_size(int channels, int mapping_family)\n{\n   int nb_streams;\n   int nb_coupled_streams;\n   opus_int32 size;\n\n   if (mapping_family==0)\n   {\n      if (channels==1)\n      {\n         nb_streams=1;\n         nb_coupled_streams=0;\n      } else if (channels==2)\n      {\n         nb_streams=1;\n         nb_coupled_streams=1;\n      } else\n         return 0;\n   } else if (mapping_family==1 && channels<=8 && channels>=1)\n   {\n      nb_streams=vorbis_mappings[channels-1].nb_streams;\n      nb_coupled_streams=vorbis_mappings[channels-1].nb_coupled_streams;\n   } else if (mapping_family==255)\n   {\n      nb_streams=channels;\n      nb_coupled_streams=0;\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n   } else if (mapping_family==254)\n   {\n      if (!validate_ambisonics(channels, &nb_streams, &nb_coupled_streams))\n         return 0;\n#endif\n   } else\n      return 0;\n   size = opus_multistream_encoder_get_size(nb_streams, nb_coupled_streams);\n   if (channels>2)\n   {\n      size += channels*(120*sizeof(opus_val32) + sizeof(opus_val32));\n   }\n   return size;\n}\n\nstatic int opus_multistream_encoder_init_impl(\n      OpusMSEncoder *st,\n      
opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int application,\n      MappingType mapping_type\n)\n{\n   int coupled_size;\n   int mono_size;\n   int i, ret;\n   char *ptr;\n\n   if ((channels>255) || (channels<1) || (coupled_streams>streams) ||\n       (streams<1) || (coupled_streams<0) || (streams>255-coupled_streams))\n      return OPUS_BAD_ARG;\n\n   st->arch = opus_select_arch();\n   st->layout.nb_channels = channels;\n   st->layout.nb_streams = streams;\n   st->layout.nb_coupled_streams = coupled_streams;\n   if (mapping_type != MAPPING_TYPE_SURROUND)\n      st->lfe_stream = -1;\n   st->bitrate_bps = OPUS_AUTO;\n   st->application = application;\n   st->variable_duration = OPUS_FRAMESIZE_ARG;\n   for (i=0;i<st->layout.nb_channels;i++)\n      st->layout.mapping[i] = mapping[i];\n   if (!validate_layout(&st->layout))\n      return OPUS_BAD_ARG;\n   if (mapping_type == MAPPING_TYPE_SURROUND &&\n       !validate_encoder_layout(&st->layout))\n      return OPUS_BAD_ARG;\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n   if (mapping_type == MAPPING_TYPE_AMBISONICS &&\n       !validate_ambisonics(st->layout.nb_channels, NULL, NULL))\n      return OPUS_BAD_ARG;\n#endif\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   coupled_size = opus_encoder_get_size(2);\n   mono_size = opus_encoder_get_size(1);\n\n   for (i=0;i<st->layout.nb_coupled_streams;i++)\n   {\n      ret = opus_encoder_init((OpusEncoder*)ptr, Fs, 2, application);\n      if(ret!=OPUS_OK)return ret;\n      if (i==st->lfe_stream)\n         opus_encoder_ctl((OpusEncoder*)ptr, OPUS_SET_LFE(1));\n      ptr += align(coupled_size);\n   }\n   for (;i<st->layout.nb_streams;i++)\n   {\n      ret = opus_encoder_init((OpusEncoder*)ptr, Fs, 1, application);\n      if (i==st->lfe_stream)\n         opus_encoder_ctl((OpusEncoder*)ptr, OPUS_SET_LFE(1));\n      if(ret!=OPUS_OK)return ret;\n      ptr += align(mono_size);\n   }\n   if 
(mapping_type == MAPPING_TYPE_SURROUND)\n   {\n      OPUS_CLEAR(ms_get_preemph_mem(st), channels);\n      OPUS_CLEAR(ms_get_window_mem(st), channels*120);\n   }\n   st->mapping_type = mapping_type;\n   return OPUS_OK;\n}\n\nint opus_multistream_encoder_init(\n      OpusMSEncoder *st,\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int application\n)\n{\n   return opus_multistream_encoder_init_impl(st, Fs, channels, streams,\n                                             coupled_streams, mapping,\n                                             application, MAPPING_TYPE_NONE);\n}\n\nint opus_multistream_surround_encoder_init(\n      OpusMSEncoder *st,\n      opus_int32 Fs,\n      int channels,\n      int mapping_family,\n      int *streams,\n      int *coupled_streams,\n      unsigned char *mapping,\n      int application\n)\n{\n   MappingType mapping_type;\n\n   if ((channels>255) || (channels<1))\n      return OPUS_BAD_ARG;\n   st->lfe_stream = -1;\n   if (mapping_family==0)\n   {\n      if (channels==1)\n      {\n         *streams=1;\n         *coupled_streams=0;\n         mapping[0]=0;\n      } else if (channels==2)\n      {\n         *streams=1;\n         *coupled_streams=1;\n         mapping[0]=0;\n         mapping[1]=1;\n      } else\n         return OPUS_UNIMPLEMENTED;\n   } else if (mapping_family==1 && channels<=8 && channels>=1)\n   {\n      int i;\n      *streams=vorbis_mappings[channels-1].nb_streams;\n      *coupled_streams=vorbis_mappings[channels-1].nb_coupled_streams;\n      for (i=0;i<channels;i++)\n         mapping[i] = vorbis_mappings[channels-1].mapping[i];\n      if (channels>=6)\n         st->lfe_stream = *streams-1;\n   } else if (mapping_family==255)\n   {\n      int i;\n      *streams=channels;\n      *coupled_streams=0;\n      for(i=0;i<channels;i++)\n         mapping[i] = i;\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n   } else if (mapping_family==254)\n   {\n  
    int i;\n      if (!validate_ambisonics(channels, streams, coupled_streams))\n         return OPUS_BAD_ARG;\n      for(i = 0; i < (*streams - *coupled_streams); i++)\n         mapping[i] = i + (*coupled_streams * 2);\n      for(i = 0; i < *coupled_streams * 2; i++)\n         mapping[i + (*streams - *coupled_streams)] = i;\n#endif\n   } else\n      return OPUS_UNIMPLEMENTED;\n\n   if (channels>2 && mapping_family==1) {\n      mapping_type = MAPPING_TYPE_SURROUND;\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n   } else if (mapping_family==254)\n   {\n      mapping_type = MAPPING_TYPE_AMBISONICS;\n#endif\n   } else\n   {\n      mapping_type = MAPPING_TYPE_NONE;\n   }\n   return opus_multistream_encoder_init_impl(st, Fs, channels, *streams,\n                                             *coupled_streams, mapping,\n                                             application, mapping_type);\n}\n\nOpusMSEncoder *opus_multistream_encoder_create(\n      opus_int32 Fs,\n      int channels,\n      int streams,\n      int coupled_streams,\n      const unsigned char *mapping,\n      int application,\n      int *error\n)\n{\n   int ret;\n   OpusMSEncoder *st;\n   if ((channels>255) || (channels<1) || (coupled_streams>streams) ||\n       (streams<1) || (coupled_streams<0) || (streams>255-coupled_streams))\n   {\n      if (error)\n         *error = OPUS_BAD_ARG;\n      return NULL;\n   }\n   st = (OpusMSEncoder *)opus_alloc(opus_multistream_encoder_get_size(streams, coupled_streams));\n   if (st==NULL)\n   {\n      if (error)\n         *error = OPUS_ALLOC_FAIL;\n      return NULL;\n   }\n   ret = opus_multistream_encoder_init(st, Fs, channels, streams, coupled_streams, mapping, application);\n   if (ret != OPUS_OK)\n   {\n      opus_free(st);\n      st = NULL;\n   }\n   if (error)\n      *error = ret;\n   return st;\n}\n\nOpusMSEncoder *opus_multistream_surround_encoder_create(\n      opus_int32 Fs,\n      int channels,\n      int mapping_family,\n      int *streams,\n      int 
*coupled_streams,\n      unsigned char *mapping,\n      int application,\n      int *error\n)\n{\n   int ret;\n   opus_int32 size;\n   OpusMSEncoder *st;\n   if ((channels>255) || (channels<1))\n   {\n      if (error)\n         *error = OPUS_BAD_ARG;\n      return NULL;\n   }\n   size = opus_multistream_surround_encoder_get_size(channels, mapping_family);\n   if (!size)\n   {\n      if (error)\n         *error = OPUS_UNIMPLEMENTED;\n      return NULL;\n   }\n   st = (OpusMSEncoder *)opus_alloc(size);\n   if (st==NULL)\n   {\n      if (error)\n         *error = OPUS_ALLOC_FAIL;\n      return NULL;\n   }\n   ret = opus_multistream_surround_encoder_init(st, Fs, channels, mapping_family, streams, coupled_streams, mapping, application);\n   if (ret != OPUS_OK)\n   {\n      opus_free(st);\n      st = NULL;\n   }\n   if (error)\n      *error = ret;\n   return st;\n}\n\nstatic void surround_rate_allocation(\n      OpusMSEncoder *st,\n      opus_int32 *rate,\n      int frame_size,\n      opus_int32 Fs\n      )\n{\n   int i;\n   opus_int32 channel_rate;\n   int stream_offset;\n   int lfe_offset;\n   int coupled_ratio; /* Q8 */\n   int lfe_ratio;     /* Q8 */\n   int nb_lfe;\n   int nb_uncoupled;\n   int nb_coupled;\n   int nb_normal;\n   opus_int32 channel_offset;\n   opus_int32 bitrate;\n   int total;\n\n   nb_lfe = (st->lfe_stream!=-1);\n   nb_coupled = st->layout.nb_coupled_streams;\n   nb_uncoupled = st->layout.nb_streams-nb_coupled-nb_lfe;\n   nb_normal = 2*nb_coupled + nb_uncoupled;\n\n   /* Give each non-LFE channel enough bits per channel for coding band energy. 
*/\n   channel_offset = 40*IMAX(50, Fs/frame_size);\n\n   if (st->bitrate_bps==OPUS_AUTO)\n   {\n      bitrate = nb_normal*(channel_offset + Fs + 10000) + 8000*nb_lfe;\n   } else if (st->bitrate_bps==OPUS_BITRATE_MAX)\n   {\n      bitrate = nb_normal*300000 + nb_lfe*128000;\n   } else {\n      bitrate = st->bitrate_bps;\n   }\n\n   /* Give LFE some basic stream_channel allocation but never exceed 1/20 of the\n      total rate for the non-energy part to avoid problems at really low rate. */\n   lfe_offset = IMIN(bitrate/20, 3000) + 15*IMAX(50, Fs/frame_size);\n\n   /* We give each stream (coupled or uncoupled) a starting bitrate.\n      This models the main saving of coupled channels over uncoupled. */\n   stream_offset = (bitrate - channel_offset*nb_normal - lfe_offset*nb_lfe)/nb_normal/2;\n   stream_offset = IMAX(0, IMIN(20000, stream_offset));\n\n   /* Coupled streams get twice the mono rate after the offset is allocated. */\n   coupled_ratio = 512;\n   /* Should depend on the bitrate, for now we assume LFE gets 1/8 the bits of mono */\n   lfe_ratio = 32;\n\n   total = (nb_uncoupled<<8)         /* mono */\n         + coupled_ratio*nb_coupled /* stereo */\n         + nb_lfe*lfe_ratio;\n   channel_rate = 256*(opus_int64)(bitrate - lfe_offset*nb_lfe - stream_offset*(nb_coupled+nb_uncoupled) - channel_offset*nb_normal)/total;\n\n   for (i=0;i<st->layout.nb_streams;i++)\n   {\n      if (i<st->layout.nb_coupled_streams)\n         rate[i] = 2*channel_offset + IMAX(0, stream_offset+(channel_rate*coupled_ratio>>8));\n      else if (i!=st->lfe_stream)\n         rate[i] = channel_offset + IMAX(0, stream_offset + channel_rate);\n      else\n         rate[i] = IMAX(0, lfe_offset+(channel_rate*lfe_ratio>>8));\n   }\n}\n\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\nstatic void ambisonics_rate_allocation(\n      OpusMSEncoder *st,\n      opus_int32 *rate,\n      int frame_size,\n      opus_int32 Fs\n      )\n{\n   int i;\n   int total_rate;\n   int directional_rate;\n   int 
nondirectional_rate;\n   int leftover_bits;\n\n   /* Each nondirectional channel gets (rate_ratio_num / rate_ratio_den) times\n    * as many bits as all other ambisonics channels.\n    */\n   const int rate_ratio_num = 4;\n   const int rate_ratio_den = 3;\n   const int nb_channels = st->layout.nb_streams + st->layout.nb_coupled_streams;\n   const int nb_nondirectional_channels = st->layout.nb_coupled_streams * 2 + 1;\n   const int nb_directional_channels = st->layout.nb_streams - 1;\n\n   if (st->bitrate_bps==OPUS_AUTO)\n   {\n      total_rate = (st->layout.nb_coupled_streams + st->layout.nb_streams) *\n         (Fs+60*Fs/frame_size) + st->layout.nb_streams * 15000;\n   } else if (st->bitrate_bps==OPUS_BITRATE_MAX)\n   {\n      total_rate = nb_channels * 320000;\n   } else\n   {\n      total_rate = st->bitrate_bps;\n   }\n\n   /* Let y be the directional rate, m be the num of nondirectional channels\n    *   m = (s + 1)\n    * and let p, q be integers such that the nondirectional rate is\n    *   m_rate = (p / q) * y\n    * Also let T be the total bitrate to allocate. 
Then\n    *   T = (n - m) * y + m * m_rate\n    * Solving for y,\n    *   y = (q * T) / (m * (p - q) + n * q)\n    */\n   directional_rate =\n      total_rate * rate_ratio_den\n      / (nb_nondirectional_channels * (rate_ratio_num - rate_ratio_den)\n       + nb_channels * rate_ratio_den);\n\n   /* Calculate the nondirectional rate.\n    *   m_rate = y * (p / q)\n    */\n   nondirectional_rate = directional_rate * rate_ratio_num / rate_ratio_den;\n\n   /* Calculate the leftover from truncation error.\n    *   leftover = T - y * (n - m) - m_rate * m\n    * Place leftover bits in omnidirectional channel.\n    */\n   leftover_bits = total_rate\n      - directional_rate * nb_directional_channels\n      - nondirectional_rate * nb_nondirectional_channels;\n\n   /* Calculate rates for each channel */\n   for (i = 0; i < st->layout.nb_streams; i++)\n   {\n      if (i < st->layout.nb_coupled_streams)\n      {\n         rate[i] = nondirectional_rate * 2;\n      } else if (i == st->layout.nb_coupled_streams)\n      {\n         rate[i] = nondirectional_rate + leftover_bits;\n      } else\n      {\n         rate[i] = directional_rate;\n      }\n   }\n}\n#endif /* ENABLE_EXPERIMENTAL_AMBISONICS */\n\nstatic opus_int32 rate_allocation(\n      OpusMSEncoder *st,\n      opus_int32 *rate,\n      int frame_size\n      )\n{\n   int i;\n   opus_int32 rate_sum=0;\n   opus_int32 Fs;\n   char *ptr;\n\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   opus_encoder_ctl((OpusEncoder*)ptr, OPUS_GET_SAMPLE_RATE(&Fs));\n\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n   if (st->mapping_type == MAPPING_TYPE_AMBISONICS) {\n     ambisonics_rate_allocation(st, rate, frame_size, Fs);\n   } else\n#endif\n   {\n     surround_rate_allocation(st, rate, frame_size, Fs);\n   }\n\n   for (i=0;i<st->layout.nb_streams;i++)\n   {\n      rate[i] = IMAX(rate[i], 500);\n      rate_sum += rate[i];\n   }\n   return rate_sum;\n}\n\n/* Max size in case the encoder decides to return six frames (6 x 20 ms = 120 ms) 
*/\n#define MS_FRAME_TMP (6*1275+12)\nstatic int opus_multistream_encode_native\n(\n    OpusMSEncoder *st,\n    opus_copy_channel_in_func copy_channel_in,\n    const void *pcm,\n    int analysis_frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes,\n    int lsb_depth,\n    downmix_func downmix,\n    int float_api\n)\n{\n   opus_int32 Fs;\n   int coupled_size;\n   int mono_size;\n   int s;\n   char *ptr;\n   int tot_size;\n   VARDECL(opus_val16, buf);\n   VARDECL(opus_val16, bandSMR);\n   unsigned char tmp_data[MS_FRAME_TMP];\n   OpusRepacketizer rp;\n   opus_int32 vbr;\n   const CELTMode *celt_mode;\n   opus_int32 bitrates[256];\n   opus_val16 bandLogE[42];\n   opus_val32 *mem = NULL;\n   opus_val32 *preemph_mem=NULL;\n   int frame_size;\n   opus_int32 rate_sum;\n   opus_int32 smallest_packet;\n   ALLOC_STACK;\n\n   if (st->mapping_type == MAPPING_TYPE_SURROUND)\n   {\n      preemph_mem = ms_get_preemph_mem(st);\n      mem = ms_get_window_mem(st);\n   }\n\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   opus_encoder_ctl((OpusEncoder*)ptr, OPUS_GET_SAMPLE_RATE(&Fs));\n   opus_encoder_ctl((OpusEncoder*)ptr, OPUS_GET_VBR(&vbr));\n   opus_encoder_ctl((OpusEncoder*)ptr, CELT_GET_MODE(&celt_mode));\n\n   frame_size = frame_size_select(analysis_frame_size, st->variable_duration, Fs);\n   if (frame_size <= 0)\n   {\n      RESTORE_STACK;\n      return OPUS_BAD_ARG;\n   }\n\n   /* Smallest packet the encoder can produce. */\n   smallest_packet = st->layout.nb_streams*2-1;\n   /* 100 ms needs an extra byte per stream for the ToC. 
*/\n   if (Fs/frame_size == 10)\n     smallest_packet += st->layout.nb_streams;\n   if (max_data_bytes < smallest_packet)\n   {\n      RESTORE_STACK;\n      return OPUS_BUFFER_TOO_SMALL;\n   }\n   ALLOC(buf, 2*frame_size, opus_val16);\n   coupled_size = opus_encoder_get_size(2);\n   mono_size = opus_encoder_get_size(1);\n\n   ALLOC(bandSMR, 21*st->layout.nb_channels, opus_val16);\n   if (st->mapping_type == MAPPING_TYPE_SURROUND)\n   {\n      surround_analysis(celt_mode, pcm, bandSMR, mem, preemph_mem, frame_size, 120, st->layout.nb_channels, Fs, copy_channel_in, st->arch);\n   }\n\n   /* Compute bitrate allocation between streams (this could be a lot better) */\n   rate_sum = rate_allocation(st, bitrates, frame_size);\n\n   if (!vbr)\n   {\n      if (st->bitrate_bps == OPUS_AUTO)\n      {\n         max_data_bytes = IMIN(max_data_bytes, 3*rate_sum/(3*8*Fs/frame_size));\n      } else if (st->bitrate_bps != OPUS_BITRATE_MAX)\n      {\n         max_data_bytes = IMIN(max_data_bytes, IMAX(smallest_packet,\n                          3*st->bitrate_bps/(3*8*Fs/frame_size)));\n      }\n   }\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   for (s=0;s<st->layout.nb_streams;s++)\n   {\n      OpusEncoder *enc;\n      enc = (OpusEncoder*)ptr;\n      if (s < st->layout.nb_coupled_streams)\n         ptr += align(coupled_size);\n      else\n         ptr += align(mono_size);\n      opus_encoder_ctl(enc, OPUS_SET_BITRATE(bitrates[s]));\n      if (st->mapping_type == MAPPING_TYPE_SURROUND)\n      {\n         opus_int32 equiv_rate;\n         equiv_rate = st->bitrate_bps;\n         if (frame_size*50 < Fs)\n            equiv_rate -= 60*(Fs/frame_size - 50)*st->layout.nb_channels;\n         if (equiv_rate > 10000*st->layout.nb_channels)\n            opus_encoder_ctl(enc, OPUS_SET_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));\n         else if (equiv_rate > 7000*st->layout.nb_channels)\n            opus_encoder_ctl(enc, OPUS_SET_BANDWIDTH(OPUS_BANDWIDTH_SUPERWIDEBAND));\n         else if 
(equiv_rate > 5000*st->layout.nb_channels)\n            opus_encoder_ctl(enc, OPUS_SET_BANDWIDTH(OPUS_BANDWIDTH_WIDEBAND));\n         else\n            opus_encoder_ctl(enc, OPUS_SET_BANDWIDTH(OPUS_BANDWIDTH_NARROWBAND));\n         if (s < st->layout.nb_coupled_streams)\n         {\n            /* To preserve the spatial image, force stereo CELT on coupled streams */\n            opus_encoder_ctl(enc, OPUS_SET_FORCE_MODE(MODE_CELT_ONLY));\n            opus_encoder_ctl(enc, OPUS_SET_FORCE_CHANNELS(2));\n         }\n      }\n#ifdef ENABLE_EXPERIMENTAL_AMBISONICS\n      else if (st->mapping_type == MAPPING_TYPE_AMBISONICS) {\n        opus_encoder_ctl(enc, OPUS_SET_FORCE_MODE(MODE_CELT_ONLY));\n      }\n#endif\n   }\n\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   /* Counting ToC */\n   tot_size = 0;\n   for (s=0;s<st->layout.nb_streams;s++)\n   {\n      OpusEncoder *enc;\n      int len;\n      int curr_max;\n      int c1, c2;\n      int ret;\n\n      opus_repacketizer_init(&rp);\n      enc = (OpusEncoder*)ptr;\n      if (s < st->layout.nb_coupled_streams)\n      {\n         int i;\n         int left, right;\n         left = get_left_channel(&st->layout, s, -1);\n         right = get_right_channel(&st->layout, s, -1);\n         (*copy_channel_in)(buf, 2,\n            pcm, st->layout.nb_channels, left, frame_size);\n         (*copy_channel_in)(buf+1, 2,\n            pcm, st->layout.nb_channels, right, frame_size);\n         ptr += align(coupled_size);\n         if (st->mapping_type == MAPPING_TYPE_SURROUND)\n         {\n            for (i=0;i<21;i++)\n            {\n               bandLogE[i] = bandSMR[21*left+i];\n               bandLogE[21+i] = bandSMR[21*right+i];\n            }\n         }\n         c1 = left;\n         c2 = right;\n      } else {\n         int i;\n         int chan = get_mono_channel(&st->layout, s, -1);\n         (*copy_channel_in)(buf, 1,\n            pcm, st->layout.nb_channels, chan, frame_size);\n         ptr += align(mono_size);\n    
     if (st->mapping_type == MAPPING_TYPE_SURROUND)\n         {\n            for (i=0;i<21;i++)\n               bandLogE[i] = bandSMR[21*chan+i];\n         }\n         c1 = chan;\n         c2 = -1;\n      }\n      if (st->mapping_type == MAPPING_TYPE_SURROUND)\n         opus_encoder_ctl(enc, OPUS_SET_ENERGY_MASK(bandLogE));\n      /* number of bytes left (+Toc) */\n      curr_max = max_data_bytes - tot_size;\n      /* Reserve one byte for the last stream and two for the others */\n      curr_max -= IMAX(0,2*(st->layout.nb_streams-s-1)-1);\n      /* For 100 ms, reserve an extra byte per stream for the ToC */\n      if (Fs/frame_size == 10)\n        curr_max -= st->layout.nb_streams-s-1;\n      curr_max = IMIN(curr_max,MS_FRAME_TMP);\n      /* Repacketizer will add one or two bytes for self-delimited frames */\n      if (s != st->layout.nb_streams-1) curr_max -=  curr_max>253 ? 2 : 1;\n      if (!vbr && s == st->layout.nb_streams-1)\n         opus_encoder_ctl(enc, OPUS_SET_BITRATE(curr_max*(8*Fs/frame_size)));\n      len = opus_encode_native(enc, buf, frame_size, tmp_data, curr_max, lsb_depth,\n            pcm, analysis_frame_size, c1, c2, st->layout.nb_channels, downmix, float_api);\n      if (len<0)\n      {\n         RESTORE_STACK;\n         return len;\n      }\n      /* We need to use the repacketizer to add the self-delimiting lengths\n         while taking into account the fact that the encoder can now return\n         more than one frame at a time (e.g. 60 ms CELT-only) */\n      ret = opus_repacketizer_cat(&rp, tmp_data, len);\n      /* If the opus_repacketizer_cat() fails, then something's seriously wrong\n         with the encoder. 
*/\n      if (ret != OPUS_OK)\n      {\n         RESTORE_STACK;\n         return OPUS_INTERNAL_ERROR;\n      }\n      len = opus_repacketizer_out_range_impl(&rp, 0, opus_repacketizer_get_nb_frames(&rp),\n            data, max_data_bytes-tot_size, s != st->layout.nb_streams-1, !vbr && s == st->layout.nb_streams-1);\n      data += len;\n      tot_size += len;\n   }\n   /*printf(\"\\n\");*/\n   RESTORE_STACK;\n   return tot_size;\n}\n\n#if !defined(DISABLE_FLOAT_API)\nstatic void opus_copy_channel_in_float(\n  opus_val16 *dst,\n  int dst_stride,\n  const void *src,\n  int src_stride,\n  int src_channel,\n  int frame_size\n)\n{\n   const float *float_src;\n   opus_int32 i;\n   float_src = (const float *)src;\n   for (i=0;i<frame_size;i++)\n#if defined(FIXED_POINT)\n      dst[i*dst_stride] = FLOAT2INT16(float_src[i*src_stride+src_channel]);\n#else\n      dst[i*dst_stride] = float_src[i*src_stride+src_channel];\n#endif\n}\n#endif\n\nstatic void opus_copy_channel_in_short(\n  opus_val16 *dst,\n  int dst_stride,\n  const void *src,\n  int src_stride,\n  int src_channel,\n  int frame_size\n)\n{\n   const opus_int16 *short_src;\n   opus_int32 i;\n   short_src = (const opus_int16 *)src;\n   for (i=0;i<frame_size;i++)\n#if defined(FIXED_POINT)\n      dst[i*dst_stride] = short_src[i*src_stride+src_channel];\n#else\n      dst[i*dst_stride] = (1/32768.f)*short_src[i*src_stride+src_channel];\n#endif\n}\n\n\n#ifdef FIXED_POINT\nint opus_multistream_encode(\n    OpusMSEncoder *st,\n    const opus_val16 *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n)\n{\n   return opus_multistream_encode_native(st, opus_copy_channel_in_short,\n      pcm, frame_size, data, max_data_bytes, 16, downmix_int, 0);\n}\n\n#ifndef DISABLE_FLOAT_API\nint opus_multistream_encode_float(\n    OpusMSEncoder *st,\n    const float *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n)\n{\n   return opus_multistream_encode_native(st, 
opus_copy_channel_in_float,\n      pcm, frame_size, data, max_data_bytes, 16, downmix_float, 1);\n}\n#endif\n\n#else\n\nint opus_multistream_encode_float\n(\n    OpusMSEncoder *st,\n    const opus_val16 *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n)\n{\n   return opus_multistream_encode_native(st, opus_copy_channel_in_float,\n      pcm, frame_size, data, max_data_bytes, 24, downmix_float, 1);\n}\n\nint opus_multistream_encode(\n    OpusMSEncoder *st,\n    const opus_int16 *pcm,\n    int frame_size,\n    unsigned char *data,\n    opus_int32 max_data_bytes\n)\n{\n   return opus_multistream_encode_native(st, opus_copy_channel_in_short,\n      pcm, frame_size, data, max_data_bytes, 16, downmix_int, 0);\n}\n#endif\n\nint opus_multistream_encoder_ctl(OpusMSEncoder *st, int request, ...)\n{\n   va_list ap;\n   int coupled_size, mono_size;\n   char *ptr;\n   int ret = OPUS_OK;\n\n   va_start(ap, request);\n\n   coupled_size = opus_encoder_get_size(2);\n   mono_size = opus_encoder_get_size(1);\n   ptr = (char*)st + align(sizeof(OpusMSEncoder));\n   switch (request)\n   {\n   case OPUS_SET_BITRATE_REQUEST:\n   {\n      opus_int32 value = va_arg(ap, opus_int32);\n      if (value != OPUS_AUTO && value != OPUS_BITRATE_MAX)\n      {\n         if (value <= 0)\n            goto bad_arg;\n         value = IMIN(300000*st->layout.nb_channels, IMAX(500*st->layout.nb_channels, value));\n      }\n      st->bitrate_bps = value;\n   }\n   break;\n   case OPUS_GET_BITRATE_REQUEST:\n   {\n      int s;\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      *value = 0;\n      for (s=0;s<st->layout.nb_streams;s++)\n      {\n         opus_int32 rate;\n         OpusEncoder *enc;\n         enc = (OpusEncoder*)ptr;\n         if (s < st->layout.nb_coupled_streams)\n            ptr += align(coupled_size);\n         else\n            ptr += align(mono_size);\n         opus_encoder_ctl(enc, 
request, &rate);\n         *value += rate;\n      }\n   }\n   break;\n   case OPUS_GET_LSB_DEPTH_REQUEST:\n   case OPUS_GET_VBR_REQUEST:\n   case OPUS_GET_APPLICATION_REQUEST:\n   case OPUS_GET_BANDWIDTH_REQUEST:\n   case OPUS_GET_COMPLEXITY_REQUEST:\n   case OPUS_GET_PACKET_LOSS_PERC_REQUEST:\n   case OPUS_GET_DTX_REQUEST:\n   case OPUS_GET_VOICE_RATIO_REQUEST:\n   case OPUS_GET_VBR_CONSTRAINT_REQUEST:\n   case OPUS_GET_SIGNAL_REQUEST:\n   case OPUS_GET_LOOKAHEAD_REQUEST:\n   case OPUS_GET_SAMPLE_RATE_REQUEST:\n   case OPUS_GET_INBAND_FEC_REQUEST:\n   case OPUS_GET_FORCE_CHANNELS_REQUEST:\n   case OPUS_GET_PREDICTION_DISABLED_REQUEST:\n   case OPUS_GET_PHASE_INVERSION_DISABLED_REQUEST:\n   {\n      OpusEncoder *enc;\n      /* For int32* GET params, just query the first stream */\n      opus_int32 *value = va_arg(ap, opus_int32*);\n      enc = (OpusEncoder*)ptr;\n      ret = opus_encoder_ctl(enc, request, value);\n   }\n   break;\n   case OPUS_GET_FINAL_RANGE_REQUEST:\n   {\n      int s;\n      opus_uint32 *value = va_arg(ap, opus_uint32*);\n      opus_uint32 tmp;\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      *value=0;\n      for (s=0;s<st->layout.nb_streams;s++)\n      {\n         OpusEncoder *enc;\n         enc = (OpusEncoder*)ptr;\n         if (s < st->layout.nb_coupled_streams)\n            ptr += align(coupled_size);\n         else\n            ptr += align(mono_size);\n         ret = opus_encoder_ctl(enc, request, &tmp);\n         if (ret != OPUS_OK) break;\n         *value ^= tmp;\n      }\n   }\n   break;\n   case OPUS_SET_LSB_DEPTH_REQUEST:\n   case OPUS_SET_COMPLEXITY_REQUEST:\n   case OPUS_SET_VBR_REQUEST:\n   case OPUS_SET_VBR_CONSTRAINT_REQUEST:\n   case OPUS_SET_MAX_BANDWIDTH_REQUEST:\n   case OPUS_SET_BANDWIDTH_REQUEST:\n   case OPUS_SET_SIGNAL_REQUEST:\n   case OPUS_SET_APPLICATION_REQUEST:\n   case OPUS_SET_INBAND_FEC_REQUEST:\n   case OPUS_SET_PACKET_LOSS_PERC_REQUEST:\n   case OPUS_SET_DTX_REQUEST:\n   case 
OPUS_SET_FORCE_MODE_REQUEST:\n   case OPUS_SET_FORCE_CHANNELS_REQUEST:\n   case OPUS_SET_PREDICTION_DISABLED_REQUEST:\n   case OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST:\n   {\n      int s;\n      /* This works for int32 params */\n      opus_int32 value = va_arg(ap, opus_int32);\n      for (s=0;s<st->layout.nb_streams;s++)\n      {\n         OpusEncoder *enc;\n\n         enc = (OpusEncoder*)ptr;\n         if (s < st->layout.nb_coupled_streams)\n            ptr += align(coupled_size);\n         else\n            ptr += align(mono_size);\n         ret = opus_encoder_ctl(enc, request, value);\n         if (ret != OPUS_OK)\n            break;\n      }\n   }\n   break;\n   case OPUS_MULTISTREAM_GET_ENCODER_STATE_REQUEST:\n   {\n      int s;\n      opus_int32 stream_id;\n      OpusEncoder **value;\n      stream_id = va_arg(ap, opus_int32);\n      if (stream_id<0 || stream_id >= st->layout.nb_streams)\n         ret = OPUS_BAD_ARG;\n      value = va_arg(ap, OpusEncoder**);\n      if (!value)\n      {\n         goto bad_arg;\n      }\n      for (s=0;s<stream_id;s++)\n      {\n         if (s < st->layout.nb_coupled_streams)\n            ptr += align(coupled_size);\n         else\n            ptr += align(mono_size);\n      }\n      *value = (OpusEncoder*)ptr;\n   }\n   break;\n   case OPUS_SET_EXPERT_FRAME_DURATION_REQUEST:\n   {\n       opus_int32 value = va_arg(ap, opus_int32);\n       st->variable_duration = value;\n   }\n   break;\n   case OPUS_GET_EXPERT_FRAME_DURATION_REQUEST:\n   {\n       opus_int32 *value = va_arg(ap, opus_int32*);\n       if (!value)\n       {\n          goto bad_arg;\n       }\n       *value = st->variable_duration;\n   }\n   break;\n   case OPUS_RESET_STATE:\n   {\n      int s;\n      if (st->mapping_type == MAPPING_TYPE_SURROUND)\n      {\n         OPUS_CLEAR(ms_get_preemph_mem(st), st->layout.nb_channels);\n         OPUS_CLEAR(ms_get_window_mem(st), st->layout.nb_channels*120);\n      }\n      for (s=0;s<st->layout.nb_streams;s++)\n      {\n   
      OpusEncoder *enc;\n         enc = (OpusEncoder*)ptr;\n         if (s < st->layout.nb_coupled_streams)\n            ptr += align(coupled_size);\n         else\n            ptr += align(mono_size);\n         ret = opus_encoder_ctl(enc, OPUS_RESET_STATE);\n         if (ret != OPUS_OK)\n            break;\n      }\n   }\n   break;\n   default:\n      ret = OPUS_UNIMPLEMENTED;\n      break;\n   }\n\n   va_end(ap);\n   return ret;\nbad_arg:\n   va_end(ap);\n   return OPUS_BAD_ARG;\n}\n\nvoid opus_multistream_encoder_destroy(OpusMSEncoder *st)\n{\n    opus_free(st);\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/opus_private.h",
    "content": "/* Copyright (c) 2012 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n\n#ifndef OPUS_PRIVATE_H\n#define OPUS_PRIVATE_H\n\n#include \"arch.h\"\n#include \"opus.h\"\n#include \"celt.h\"\n\n#include <stddef.h> /* offsetof */\n\nstruct OpusRepacketizer {\n   unsigned char toc;\n   int nb_frames;\n   const unsigned char *frames[48];\n   opus_int16 len[48];\n   int framesize;\n};\n\ntypedef struct ChannelLayout {\n   int nb_channels;\n   int nb_streams;\n   int nb_coupled_streams;\n   unsigned char mapping[256];\n} ChannelLayout;\n\nint validate_layout(const ChannelLayout *layout);\nint get_left_channel(const ChannelLayout *layout, int stream_id, int 
prev);\nint get_right_channel(const ChannelLayout *layout, int stream_id, int prev);\nint get_mono_channel(const ChannelLayout *layout, int stream_id, int prev);\n\n\n\n#define MODE_SILK_ONLY          1000\n#define MODE_HYBRID             1001\n#define MODE_CELT_ONLY          1002\n\n#define OPUS_SET_VOICE_RATIO_REQUEST         11018\n#define OPUS_GET_VOICE_RATIO_REQUEST         11019\n\n/** Configures the encoder's expected percentage of voice\n  * opposed to music or other signals.\n  *\n  * @note This interface is currently more aspiration than actuality. It's\n  * ultimately expected to bias an automatic signal classifier, but it currently\n  * just shifts the static bitrate to mode mapping around a little bit.\n  *\n  * @param[in] x <tt>int</tt>:   Voice percentage in the range 0-100, inclusive.\n  * @hideinitializer */\n#define OPUS_SET_VOICE_RATIO(x) OPUS_SET_VOICE_RATIO_REQUEST, __opus_check_int(x)\n/** Gets the encoder's configured voice ratio value, @see OPUS_SET_VOICE_RATIO\n  *\n  * @param[out] x <tt>int*</tt>:  Voice percentage in the range 0-100, inclusive.\n  * @hideinitializer */\n#define OPUS_GET_VOICE_RATIO(x) OPUS_GET_VOICE_RATIO_REQUEST, __opus_check_int_ptr(x)\n\n\n#define OPUS_SET_FORCE_MODE_REQUEST    11002\n#define OPUS_SET_FORCE_MODE(x) OPUS_SET_FORCE_MODE_REQUEST, __opus_check_int(x)\n\ntypedef void (*downmix_func)(const void *, opus_val32 *, int, int, int, int, int);\nvoid downmix_float(const void *_x, opus_val32 *sub, int subframe, int offset, int c1, int c2, int C);\nvoid downmix_int(const void *_x, opus_val32 *sub, int subframe, int offset, int c1, int c2, int C);\n\nint encode_size(int size, unsigned char *data);\n\nopus_int32 frame_size_select(opus_int32 frame_size, int variable_duration, opus_int32 Fs);\n\nopus_int32 opus_encode_native(OpusEncoder *st, const opus_val16 *pcm, int frame_size,\n      unsigned char *data, opus_int32 out_data_bytes, int lsb_depth,\n      const void *analysis_pcm, opus_int32 analysis_size, int c1, int 
c2,\n      int analysis_channels, downmix_func downmix, int float_api);\n\nint opus_decode_native(OpusDecoder *st, const unsigned char *data, opus_int32 len,\n      opus_val16 *pcm, int frame_size, int decode_fec, int self_delimited,\n      opus_int32 *packet_offset, int soft_clip);\n\n/* Make sure everything is properly aligned. */\nstatic OPUS_INLINE int align(int i)\n{\n    struct foo {char c; union { void* p; opus_int32 i; opus_val32 v; } u;};\n\n    unsigned int alignment = offsetof(struct foo, u);\n\n    /* Optimizing compilers should optimize div and multiply into and\n       for all sensible alignment values. */\n    return ((i + alignment - 1) / alignment) * alignment;\n}\n\nint opus_packet_parse_impl(const unsigned char *data, opus_int32 len,\n      int self_delimited, unsigned char *out_toc,\n      const unsigned char *frames[48], opus_int16 size[48],\n      int *payload_offset, opus_int32 *packet_offset);\n\nopus_int32 opus_repacketizer_out_range_impl(OpusRepacketizer *rp, int begin, int end,\n      unsigned char *data, opus_int32 maxlen, int self_delimited, int pad);\n\nint pad_frame(unsigned char *data, opus_int32 len, opus_int32 new_len);\n\n#endif /* OPUS_PRIVATE_H */\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/repacketizer.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus.h\"\n#include \"opus_private.h\"\n#include \"os_support.h\"\n\n\nint opus_repacketizer_get_size(void)\n{\n   return sizeof(OpusRepacketizer);\n}\n\nOpusRepacketizer *opus_repacketizer_init(OpusRepacketizer *rp)\n{\n   rp->nb_frames = 0;\n   return rp;\n}\n\nOpusRepacketizer *opus_repacketizer_create(void)\n{\n   OpusRepacketizer *rp;\n   rp=(OpusRepacketizer *)opus_alloc(opus_repacketizer_get_size());\n   if(rp==NULL)return NULL;\n   return opus_repacketizer_init(rp);\n}\n\nvoid 
opus_repacketizer_destroy(OpusRepacketizer *rp)\n{\n   opus_free(rp);\n}\n\nstatic int opus_repacketizer_cat_impl(OpusRepacketizer *rp, const unsigned char *data, opus_int32 len, int self_delimited)\n{\n   unsigned char tmp_toc;\n   int curr_nb_frames,ret;\n   /* Set of check ToC */\n   if (len<1) return OPUS_INVALID_PACKET;\n   if (rp->nb_frames == 0)\n   {\n      rp->toc = data[0];\n      rp->framesize = opus_packet_get_samples_per_frame(data, 8000);\n   } else if ((rp->toc&0xFC) != (data[0]&0xFC))\n   {\n      /*fprintf(stderr, \"toc mismatch: 0x%x vs 0x%x\\n\", rp->toc, data[0]);*/\n      return OPUS_INVALID_PACKET;\n   }\n   curr_nb_frames = opus_packet_get_nb_frames(data, len);\n   if(curr_nb_frames<1) return OPUS_INVALID_PACKET;\n\n   /* Check the 120 ms maximum packet size */\n   if ((curr_nb_frames+rp->nb_frames)*rp->framesize > 960)\n   {\n      return OPUS_INVALID_PACKET;\n   }\n\n   ret=opus_packet_parse_impl(data, len, self_delimited, &tmp_toc, &rp->frames[rp->nb_frames], &rp->len[rp->nb_frames], NULL, NULL);\n   if(ret<1)return ret;\n\n   rp->nb_frames += curr_nb_frames;\n   return OPUS_OK;\n}\n\nint opus_repacketizer_cat(OpusRepacketizer *rp, const unsigned char *data, opus_int32 len)\n{\n   return opus_repacketizer_cat_impl(rp, data, len, 0);\n}\n\nint opus_repacketizer_get_nb_frames(OpusRepacketizer *rp)\n{\n   return rp->nb_frames;\n}\n\nopus_int32 opus_repacketizer_out_range_impl(OpusRepacketizer *rp, int begin, int end,\n      unsigned char *data, opus_int32 maxlen, int self_delimited, int pad)\n{\n   int i, count;\n   opus_int32 tot_size;\n   opus_int16 *len;\n   const unsigned char **frames;\n   unsigned char * ptr;\n\n   if (begin<0 || begin>=end || end>rp->nb_frames)\n   {\n      /*fprintf(stderr, \"%d %d %d\\n\", begin, end, rp->nb_frames);*/\n      return OPUS_BAD_ARG;\n   }\n   count = end-begin;\n\n   len = rp->len+begin;\n   frames = rp->frames+begin;\n   if (self_delimited)\n      tot_size = 1 + (len[count-1]>=252);\n   else\n      
tot_size = 0;\n\n   ptr = data;\n   if (count==1)\n   {\n      /* Code 0 */\n      tot_size += len[0]+1;\n      if (tot_size > maxlen)\n         return OPUS_BUFFER_TOO_SMALL;\n      *ptr++ = rp->toc&0xFC;\n   } else if (count==2)\n   {\n      if (len[1] == len[0])\n      {\n         /* Code 1 */\n         tot_size += 2*len[0]+1;\n         if (tot_size > maxlen)\n            return OPUS_BUFFER_TOO_SMALL;\n         *ptr++ = (rp->toc&0xFC) | 0x1;\n      } else {\n         /* Code 2 */\n         tot_size += len[0]+len[1]+2+(len[0]>=252);\n         if (tot_size > maxlen)\n            return OPUS_BUFFER_TOO_SMALL;\n         *ptr++ = (rp->toc&0xFC) | 0x2;\n         ptr += encode_size(len[0], ptr);\n      }\n   }\n   if (count > 2 || (pad && tot_size < maxlen))\n   {\n      /* Code 3 */\n      int vbr;\n      int pad_amount=0;\n\n      /* Restart the process for the padding case */\n      ptr = data;\n      if (self_delimited)\n         tot_size = 1 + (len[count-1]>=252);\n      else\n         tot_size = 0;\n      vbr = 0;\n      for (i=1;i<count;i++)\n      {\n         if (len[i] != len[0])\n         {\n            vbr=1;\n            break;\n         }\n      }\n      if (vbr)\n      {\n         tot_size += 2;\n         for (i=0;i<count-1;i++)\n            tot_size += 1 + (len[i]>=252) + len[i];\n         tot_size += len[count-1];\n\n         if (tot_size > maxlen)\n            return OPUS_BUFFER_TOO_SMALL;\n         *ptr++ = (rp->toc&0xFC) | 0x3;\n         *ptr++ = count | 0x80;\n      } else {\n         tot_size += count*len[0]+2;\n         if (tot_size > maxlen)\n            return OPUS_BUFFER_TOO_SMALL;\n         *ptr++ = (rp->toc&0xFC) | 0x3;\n         *ptr++ = count;\n      }\n      pad_amount = pad ? 
(maxlen-tot_size) : 0;\n      if (pad_amount != 0)\n      {\n         int nb_255s;\n         data[1] |= 0x40;\n         nb_255s = (pad_amount-1)/255;\n         for (i=0;i<nb_255s;i++)\n            *ptr++ = 255;\n         *ptr++ = pad_amount-255*nb_255s-1;\n         tot_size += pad_amount;\n      }\n      if (vbr)\n      {\n         for (i=0;i<count-1;i++)\n            ptr += encode_size(len[i], ptr);\n      }\n   }\n   if (self_delimited) {\n      int sdlen = encode_size(len[count-1], ptr);\n      ptr += sdlen;\n   }\n   /* Copy the actual data */\n   for (i=0;i<count;i++)\n   {\n      /* Using OPUS_MOVE() instead of OPUS_COPY() in case we're doing in-place\n         padding from opus_packet_pad or opus_packet_unpad(). */\n      celt_assert(frames[i] + len[i] <= data || ptr <= frames[i]);\n      OPUS_MOVE(ptr, frames[i], len[i]);\n      ptr += len[i];\n   }\n   if (pad)\n   {\n      /* Fill padding with zeros. */\n      while (ptr<data+maxlen)\n         *ptr++=0;\n   }\n   return tot_size;\n}\n\nopus_int32 opus_repacketizer_out_range(OpusRepacketizer *rp, int begin, int end, unsigned char *data, opus_int32 maxlen)\n{\n   return opus_repacketizer_out_range_impl(rp, begin, end, data, maxlen, 0, 0);\n}\n\nopus_int32 opus_repacketizer_out(OpusRepacketizer *rp, unsigned char *data, opus_int32 maxlen)\n{\n   return opus_repacketizer_out_range_impl(rp, 0, rp->nb_frames, data, maxlen, 0, 0);\n}\n\nint opus_packet_pad(unsigned char *data, opus_int32 len, opus_int32 new_len)\n{\n   OpusRepacketizer rp;\n   opus_int32 ret;\n   if (len < 1)\n      return OPUS_BAD_ARG;\n   if (len==new_len)\n      return OPUS_OK;\n   else if (len > new_len)\n      return OPUS_BAD_ARG;\n   opus_repacketizer_init(&rp);\n   /* Moving payload to the end of the packet so we can do in-place padding */\n   OPUS_MOVE(data+new_len-len, data, len);\n   ret = opus_repacketizer_cat(&rp, data+new_len-len, len);\n   if (ret != OPUS_OK)\n      return ret;\n   ret = opus_repacketizer_out_range_impl(&rp, 0, 
rp.nb_frames, data, new_len, 0, 1);\n   if (ret > 0)\n      return OPUS_OK;\n   else\n      return ret;\n}\n\nopus_int32 opus_packet_unpad(unsigned char *data, opus_int32 len)\n{\n   OpusRepacketizer rp;\n   opus_int32 ret;\n   if (len < 1)\n      return OPUS_BAD_ARG;\n   opus_repacketizer_init(&rp);\n   ret = opus_repacketizer_cat(&rp, data, len);\n   if (ret < 0)\n      return ret;\n   ret = opus_repacketizer_out_range_impl(&rp, 0, rp.nb_frames, data, len, 0, 0);\n   celt_assert(ret > 0 && ret <= len);\n   return ret;\n}\n\nint opus_multistream_packet_pad(unsigned char *data, opus_int32 len, opus_int32 new_len, int nb_streams)\n{\n   int s;\n   int count;\n   unsigned char toc;\n   opus_int16 size[48];\n   opus_int32 packet_offset;\n   opus_int32 amount;\n\n   if (len < 1)\n      return OPUS_BAD_ARG;\n   if (len==new_len)\n      return OPUS_OK;\n   else if (len > new_len)\n      return OPUS_BAD_ARG;\n   amount = new_len - len;\n   /* Seek to last stream */\n   for (s=0;s<nb_streams-1;s++)\n   {\n      if (len<=0)\n         return OPUS_INVALID_PACKET;\n      count = opus_packet_parse_impl(data, len, 1, &toc, NULL,\n                                     size, NULL, &packet_offset);\n      if (count<0)\n         return count;\n      data += packet_offset;\n      len -= packet_offset;\n   }\n   return opus_packet_pad(data, len, len+amount);\n}\n\nopus_int32 opus_multistream_packet_unpad(unsigned char *data, opus_int32 len, int nb_streams)\n{\n   int s;\n   unsigned char toc;\n   opus_int16 size[48];\n   opus_int32 packet_offset;\n   OpusRepacketizer rp;\n   unsigned char *dst;\n   opus_int32 dst_len;\n\n   if (len < 1)\n      return OPUS_BAD_ARG;\n   dst = data;\n   dst_len = 0;\n   /* Unpad all frames */\n   for (s=0;s<nb_streams;s++)\n   {\n      opus_int32 ret;\n      int self_delimited = s!=nb_streams-1;\n      if (len<=0)\n         return OPUS_INVALID_PACKET;\n      opus_repacketizer_init(&rp);\n      ret = opus_packet_parse_impl(data, len, self_delimited, &toc, 
NULL,\n                                     size, NULL, &packet_offset);\n      if (ret<0)\n         return ret;\n      ret = opus_repacketizer_cat_impl(&rp, data, packet_offset, self_delimited);\n      if (ret < 0)\n         return ret;\n      ret = opus_repacketizer_out_range_impl(&rp, 0, rp.nb_frames, dst, len, self_delimited, 0);\n      if (ret < 0)\n         return ret;\n      else\n         dst_len += ret;\n      dst += ret;\n      data += packet_offset;\n      len -= packet_offset;\n   }\n   return dst_len;\n}\n\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/repacketizer_demo.c",
    "content": "/* Copyright (c) 2011 Xiph.Org Foundation\n   Written by Jean-Marc Valin */\n/*\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n   - Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n\n   - Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER\n   OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"opus.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define MAX_PACKETOUT 32000\n\nvoid usage(char *argv0)\n{\n   fprintf(stderr, \"usage: %s [options] input_file output_file\\n\", argv0);\n}\n\nstatic void int_to_char(opus_uint32 i, unsigned char ch[4])\n{\n    ch[0] = i>>24;\n    ch[1] = (i>>16)&0xFF;\n    ch[2] = (i>>8)&0xFF;\n    ch[3] = i&0xFF;\n}\n\nstatic opus_uint32 char_to_int(unsigned char ch[4])\n{\n    return ((opus_uint32)ch[0]<<24) | ((opus_uint32)ch[1]<<16)\n         | 
((opus_uint32)ch[2]<< 8) |  (opus_uint32)ch[3];\n}\n\nint main(int argc, char *argv[])\n{\n   int i, eof=0;\n   FILE *fin, *fout;\n   unsigned char packets[48][1500];\n   int len[48];\n   int rng[48];\n   OpusRepacketizer *rp;\n   unsigned char output_packet[MAX_PACKETOUT];\n   int merge = 1, split=0;\n\n   if (argc < 3)\n   {\n      usage(argv[0]);\n      return EXIT_FAILURE;\n   }\n   for (i=1;i<argc-2;i++)\n   {\n      if (strcmp(argv[i], \"-merge\")==0)\n      {\n         merge = atoi(argv[i+1]);\n         if(merge<1)\n         {\n            fprintf(stderr, \"-merge parameter must be at least 1.\\n\");\n            return EXIT_FAILURE;\n         }\n         if(merge>48)\n         {\n            fprintf(stderr, \"-merge parameter must be less than 48.\\n\");\n            return EXIT_FAILURE;\n         }\n         i++;\n      } else if (strcmp(argv[i], \"-split\")==0)\n         split = 1;\n      else\n      {\n         fprintf(stderr, \"Unknown option: %s\\n\", argv[i]);\n         usage(argv[0]);\n         return EXIT_FAILURE;\n      }\n   }\n   fin = fopen(argv[argc-2], \"r\");\n   if(fin==NULL)\n   {\n     fprintf(stderr, \"Error opening input file: %s\\n\", argv[argc-2]);\n     return EXIT_FAILURE;\n   }\n   fout = fopen(argv[argc-1], \"w\");\n   if(fout==NULL)\n   {\n     fprintf(stderr, \"Error opening output file: %s\\n\", argv[argc-1]);\n     fclose(fin);\n     return EXIT_FAILURE;\n   }\n\n   rp = opus_repacketizer_create();\n   while (!eof)\n   {\n      int err;\n      int nb_packets=merge;\n      opus_repacketizer_init(rp);\n      for (i=0;i<nb_packets;i++)\n      {\n         unsigned char ch[4];\n         err = fread(ch, 1, 4, fin);\n         len[i] = char_to_int(ch);\n         /*fprintf(stderr, \"in len = %d\\n\", len[i]);*/\n         if (len[i]>1500 || len[i]<0)\n         {\n             if (feof(fin))\n             {\n                eof = 1;\n             } else {\n                fprintf(stderr, \"Invalid payload length\\n\");\n                
fclose(fin);\n                fclose(fout);\n                return EXIT_FAILURE;\n             }\n             break;\n         }\n         err = fread(ch, 1, 4, fin);\n         rng[i] = char_to_int(ch);\n         err = fread(packets[i], 1, len[i], fin);\n         if (feof(fin))\n         {\n            eof = 1;\n            break;\n         }\n         err = opus_repacketizer_cat(rp, packets[i], len[i]);\n         if (err!=OPUS_OK)\n         {\n            fprintf(stderr, \"opus_repacketizer_cat() failed: %s\\n\", opus_strerror(err));\n            break;\n         }\n      }\n      nb_packets = i;\n\n      if (eof)\n         break;\n\n      if (!split)\n      {\n         err = opus_repacketizer_out(rp, output_packet, MAX_PACKETOUT);\n         if (err>0) {\n            unsigned char int_field[4];\n            int_to_char(err, int_field);\n            if(fwrite(int_field, 1, 4, fout)!=4){\n               fprintf(stderr, \"Error writing.\\n\");\n               return EXIT_FAILURE;\n            }\n            int_to_char(rng[nb_packets-1], int_field);\n            if (fwrite(int_field, 1, 4, fout)!=4) {\n               fprintf(stderr, \"Error writing.\\n\");\n               return EXIT_FAILURE;\n            }\n            if (fwrite(output_packet, 1, err, fout)!=(unsigned)err) {\n               fprintf(stderr, \"Error writing.\\n\");\n               return EXIT_FAILURE;\n            }\n            /*fprintf(stderr, \"out len = %d\\n\", err);*/\n         } else {\n            fprintf(stderr, \"opus_repacketizer_out() failed: %s\\n\", opus_strerror(err));\n         }\n      } else {\n         int nb_frames = opus_repacketizer_get_nb_frames(rp);\n         for (i=0;i<nb_frames;i++)\n         {\n            err = opus_repacketizer_out_range(rp, i, i+1, output_packet, MAX_PACKETOUT);\n            if (err>0) {\n               unsigned char int_field[4];\n               int_to_char(err, int_field);\n               if (fwrite(int_field, 1, 4, fout)!=4) {\n                  
fprintf(stderr, \"Error writing.\\n\");\n                  return EXIT_FAILURE;\n               }\n               if (i==nb_frames-1)\n                  int_to_char(rng[nb_packets-1], int_field);\n               else\n                  int_to_char(0, int_field);\n               if (fwrite(int_field, 1, 4, fout)!=4) {\n                  fprintf(stderr, \"Error writing.\\n\");\n                  return EXIT_FAILURE;\n               }\n               if (fwrite(output_packet, 1, err, fout)!=(unsigned)err) {\n                  fprintf(stderr, \"Error writing.\\n\");\n                  return EXIT_FAILURE;\n               }\n               /*fprintf(stderr, \"out len = %d\\n\", err);*/\n            } else {\n               fprintf(stderr, \"opus_repacketizer_out() failed: %s\\n\", opus_strerror(err));\n            }\n\n         }\n      }\n   }\n\n   fclose(fin);\n   fclose(fout);\n   return EXIT_SUCCESS;\n}\n"
  },
  {
    "path": "ThirdParty/opus-1.2.1/src/tansig_table.h",
    "content": "/* This file is auto-generated by gen_tables */\n\nstatic const float tansig_table[201] = {\n0.000000f, 0.039979f, 0.079830f, 0.119427f, 0.158649f,\n0.197375f, 0.235496f, 0.272905f, 0.309507f, 0.345214f,\n0.379949f, 0.413644f, 0.446244f, 0.477700f, 0.507977f,\n0.537050f, 0.564900f, 0.591519f, 0.616909f, 0.641077f,\n0.664037f, 0.685809f, 0.706419f, 0.725897f, 0.744277f,\n0.761594f, 0.777888f, 0.793199f, 0.807569f, 0.821040f,\n0.833655f, 0.845456f, 0.856485f, 0.866784f, 0.876393f,\n0.885352f, 0.893698f, 0.901468f, 0.908698f, 0.915420f,\n0.921669f, 0.927473f, 0.932862f, 0.937863f, 0.942503f,\n0.946806f, 0.950795f, 0.954492f, 0.957917f, 0.961090f,\n0.964028f, 0.966747f, 0.969265f, 0.971594f, 0.973749f,\n0.975743f, 0.977587f, 0.979293f, 0.980869f, 0.982327f,\n0.983675f, 0.984921f, 0.986072f, 0.987136f, 0.988119f,\n0.989027f, 0.989867f, 0.990642f, 0.991359f, 0.992020f,\n0.992631f, 0.993196f, 0.993718f, 0.994199f, 0.994644f,\n0.995055f, 0.995434f, 0.995784f, 0.996108f, 0.996407f,\n0.996682f, 0.996937f, 0.997172f, 0.997389f, 0.997590f,\n0.997775f, 0.997946f, 0.998104f, 0.998249f, 0.998384f,\n0.998508f, 0.998623f, 0.998728f, 0.998826f, 0.998916f,\n0.999000f, 0.999076f, 0.999147f, 0.999213f, 0.999273f,\n0.999329f, 0.999381f, 0.999428f, 0.999472f, 0.999513f,\n0.999550f, 0.999585f, 0.999617f, 0.999646f, 0.999673f,\n0.999699f, 0.999722f, 0.999743f, 0.999763f, 0.999781f,\n0.999798f, 0.999813f, 0.999828f, 0.999841f, 0.999853f,\n0.999865f, 0.999875f, 0.999885f, 0.999893f, 0.999902f,\n0.999909f, 0.999916f, 0.999923f, 0.999929f, 0.999934f,\n0.999939f, 0.999944f, 0.999948f, 0.999952f, 0.999956f,\n0.999959f, 0.999962f, 0.999965f, 0.999968f, 0.999970f,\n0.999973f, 0.999975f, 0.999977f, 0.999978f, 0.999980f,\n0.999982f, 0.999983f, 0.999984f, 0.999986f, 0.999987f,\n0.999988f, 0.999989f, 0.999990f, 0.999990f, 0.999991f,\n0.999992f, 0.999992f, 0.999993f, 0.999994f, 0.999994f,\n0.999994f, 0.999995f, 0.999995f, 0.999996f, 0.999996f,\n0.999996f, 0.999997f, 0.999997f, 
0.999997f, 0.999997f,\n0.999997f, 0.999998f, 0.999998f, 0.999998f, 0.999998f,\n0.999998f, 0.999998f, 0.999999f, 0.999999f, 0.999999f,\n0.999999f, 0.999999f, 0.999999f, 0.999999f, 0.999999f,\n0.999999f, 0.999999f, 0.999999f, 0.999999f, 0.999999f,\n1.000000f, 1.000000f, 1.000000f, 1.000000f, 1.000000f,\n1.000000f, 1.000000f, 1.000000f, 1.000000f, 1.000000f,\n1.000000f,\n};\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/AUTHORS",
    "content": "Timothy B. Terriberry <tterribe@xiph.org>\nRalph Giles <giles@xiph.org>\nChristopher \"Monty\" Montgomery <xiphmont@xiph.org> (original libvorbisfile)\nGregory Maxwell <greg@xiph.org> (noise shaping dithering)\nnu774 <honeycomb77@gmail.com> (original winsock support)\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/include/opusfile.h",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 1994-2012           *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************\n\n function: stdio-based convenience library for opening/seeking/decoding\n last mod: $Id: vorbisfile.h 17182 2010-04-29 03:48:32Z xiphmont $\n\n ********************************************************************/\n#if !defined(_opusfile_h)\n# define _opusfile_h (1)\n\n/**\\mainpage\n   \\section Introduction\n\n   This is the documentation for the <tt>libopusfile</tt> C API.\n\n   The <tt>libopusfile</tt> package provides a convenient high-level API for\n    decoding and basic manipulation of all Ogg Opus audio streams.\n   <tt>libopusfile</tt> is implemented as a layer on top of Xiph.Org's\n    reference\n    <tt><a href=\"https://www.xiph.org/ogg/doc/libogg/reference.html\">libogg</a></tt>\n    and\n    <tt><a href=\"https://mf4.xiph.org/jenkins/view/opus/job/opus/ws/doc/html/index.html\">libopus</a></tt>\n    libraries.\n\n   <tt>libopusfile</tt> provides several sets of built-in routines for\n    file/stream access, and may also use custom stream I/O routines provided by\n    the embedded environment.\n   There are built-in I/O routines provided for ANSI-compliant\n    <code>stdio</code> (<code>FILE *</code>), memory buffers, and URLs\n    (including <file:> URLs, plus 
optionally <http:> and <https:> URLs).\n\n   \\section Organization\n\n   The main API is divided into several sections:\n   - \\ref stream_open_close\n   - \\ref stream_info\n   - \\ref stream_decoding\n   - \\ref stream_seeking\n\n   Several additional sections are not tied to the main API.\n   - \\ref stream_callbacks\n   - \\ref header_info\n   - \\ref error_codes\n\n   \\section Overview\n\n   The <tt>libopusfile</tt> API always decodes files to 48&nbsp;kHz.\n   The original sample rate is not preserved by the lossy compression, though\n    it is stored in the header to allow you to resample to it after decoding\n    (the <tt>libopusfile</tt> API does not currently provide a resampler,\n    but the\n    <a href=\"http://www.speex.org/docs/manual/speex-manual/node7.html#SECTION00760000000000000000\">the\n    Speex resampler</a> is a good choice if you need one).\n   In general, if you are playing back the audio, you should leave it at\n    48&nbsp;kHz, provided your audio hardware supports it.\n   When decoding to a file, it may be worth resampling back to the original\n    sample rate, so as not to surprise users who might not expect the sample\n    rate to change after encoding to Opus and decoding.\n\n   Opus files can contain anywhere from 1 to 255 channels of audio.\n   The channel mappings for up to 8 channels are the same as the\n    <a href=\"http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-800004.3.9\">Vorbis\n    mappings</a>.\n   A special stereo API can convert everything to 2 channels, making it simple\n    to support multichannel files in an application which only has stereo\n    output.\n   Although the <tt>libopusfile</tt> ABI provides support for the theoretical\n    maximum number of channels, the current implementation does not support\n    files with more than 8 channels, as they do not have well-defined channel\n    mappings.\n\n   Like all Ogg files, Opus files may be \"chained\".\n   That is, multiple Opus files may be combined into a 
single, longer file just\n    by concatenating the original files.\n   This is commonly done in internet radio streaming, as it allows the title\n    and artist to be updated each time the song changes, since each link in the\n    chain includes its own set of metadata.\n\n   <tt>libopusfile</tt> fully supports chained files.\n   It will decode the first Opus stream found in each link of a chained file\n    (ignoring any other streams that might be concurrently multiplexed with it,\n    such as a video stream).\n\n   The channel count can also change between links.\n   If your application is not prepared to deal with this, it can use the stereo\n    API to ensure the audio from all links will always get decoded into a\n    common format.\n   Since <tt>libopusfile</tt> always decodes to 48&nbsp;kHz, you do not have to\n    worry about the sample rate changing between links (as was possible with\n    Vorbis).\n   This makes application support for chained files with <tt>libopusfile</tt>\n    very easy.*/\n\n# if defined(__cplusplus)\nextern \"C\" {\n# endif\n\n# include <stdarg.h>\n# include <stdio.h>\n# include <ogg/ogg.h>\n# include <opus_multistream.h>\n\n/**@cond PRIVATE*/\n\n/*Enable special features for gcc and gcc-compatible compilers.*/\n# if !defined(OP_GNUC_PREREQ)\n#  if defined(__GNUC__)&&defined(__GNUC_MINOR__)\n#   define OP_GNUC_PREREQ(_maj,_min) \\\n ((__GNUC__<<16)+__GNUC_MINOR__>=((_maj)<<16)+(_min))\n#  else\n#   define OP_GNUC_PREREQ(_maj,_min) 0\n#  endif\n# endif\n\n# if OP_GNUC_PREREQ(4,0)\n#  pragma GCC visibility push(default)\n# endif\n\ntypedef struct OpusHead          OpusHead;\ntypedef struct OpusTags          OpusTags;\ntypedef struct OpusPictureTag    OpusPictureTag;\ntypedef struct OpusServerInfo    OpusServerInfo;\ntypedef struct OpusFileCallbacks OpusFileCallbacks;\ntypedef struct OggOpusFile       OggOpusFile;\n\n/*Warning attributes for libopusfile functions.*/\n# if OP_GNUC_PREREQ(3,4)\n#  define OP_WARN_UNUSED_RESULT 
__attribute__((__warn_unused_result__))\n# else\n#  define OP_WARN_UNUSED_RESULT\n# endif\n# if OP_GNUC_PREREQ(3,4)\n#  define OP_ARG_NONNULL(_x) __attribute__((__nonnull__(_x)))\n# else\n#  define OP_ARG_NONNULL(_x)\n# endif\n\n/**@endcond*/\n\n/**\\defgroup error_codes Error Codes*/\n/*@{*/\n/**\\name List of possible error codes\n   Many of the functions in this library return a negative error code when a\n    function fails.\n   This list provides a brief explanation of the common errors.\n   See each individual function for more details on what a specific error code\n    means in that context.*/\n/*@{*/\n\n/**A request did not succeed.*/\n#define OP_FALSE         (-1)\n/*Currently not used externally.*/\n#define OP_EOF           (-2)\n/**There was a hole in the page sequence numbers (e.g., a page was corrupt or\n    missing).*/\n#define OP_HOLE          (-3)\n/**An underlying read, seek, or tell operation failed when it should have\n    succeeded.*/\n#define OP_EREAD         (-128)\n/**A <code>NULL</code> pointer was passed where one was unexpected, or an\n    internal memory allocation failed, or an internal library error was\n    encountered.*/\n#define OP_EFAULT        (-129)\n/**The stream used a feature that is not implemented, such as an unsupported\n    channel family.*/\n#define OP_EIMPL         (-130)\n/**One or more parameters to a function were invalid.*/\n#define OP_EINVAL        (-131)\n/**A purported Ogg Opus stream did not begin with an Ogg page, a purported\n    header packet did not start with one of the required strings, \"OpusHead\" or\n    \"OpusTags\", or a link in a chained file was encountered that did not\n    contain any logical Opus streams.*/\n#define OP_ENOTFORMAT    (-132)\n/**A required header packet was not properly formatted, contained illegal\n    values, or was missing altogether.*/\n#define OP_EBADHEADER    (-133)\n/**The ID header contained an unrecognized version number.*/\n#define OP_EVERSION      (-134)\n/*Currently not 
used at all.*/\n#define OP_ENOTAUDIO     (-135)\n/**An audio packet failed to decode properly.\n   This is usually caused by a multistream Ogg packet where the durations of\n    the individual Opus packets contained in it are not all the same.*/\n#define OP_EBADPACKET    (-136)\n/**We failed to find data we had seen before, or the bitstream structure was\n    sufficiently malformed that seeking to the target destination was\n    impossible.*/\n#define OP_EBADLINK      (-137)\n/**An operation that requires seeking was requested on an unseekable stream.*/\n#define OP_ENOSEEK       (-138)\n/**The first or last granule position of a link failed basic validity checks.*/\n#define OP_EBADTIMESTAMP (-139)\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup header_info Header Information*/\n/*@{*/\n\n/**The maximum number of channels in an Ogg Opus stream.*/\n#define OPUS_CHANNEL_COUNT_MAX (255)\n\n/**Ogg Opus bitstream information.\n   This contains the basic playback parameters for a stream, and corresponds to\n    the initial ID header packet of an Ogg Opus stream.*/\nstruct OpusHead{\n  /**The Ogg Opus format version, in the range 0...255.\n     The top 4 bits represent a \"major\" version, and the bottom four bits\n      represent backwards-compatible \"minor\" revisions.\n     The current specification describes version 1.\n     This library will recognize versions up through 15 as backwards compatible\n      with the current specification.\n     An earlier draft of the specification described a version 0, but the only\n      difference between version 1 and version 0 is that version 0 did\n      not specify the semantics for handling the version field.*/\n  int           version;\n  /**The number of channels, in the range 1...255.*/\n  int           channel_count;\n  /**The number of samples that should be discarded from the beginning of the\n      stream.*/\n  unsigned      pre_skip;\n  /**The sampling rate of the original input.\n     All Opus audio is coded at 48 kHz, and should 
also be decoded at 48 kHz\n      for playback (unless the target hardware does not support this sampling\n      rate).\n     However, this field may be used to resample the audio back to the original\n      sampling rate, for example, when saving the output to a file.*/\n  opus_uint32   input_sample_rate;\n  /**The gain to apply to the decoded output, in dB, as a Q8 value in the range\n      -32768...32767.\n     The <tt>libopusfile</tt> API will automatically apply this gain to the\n      decoded output before returning it, scaling it by\n      <code>pow(10,output_gain/(20.0*256))</code>.\n     You can adjust this behavior with op_set_gain_offset().*/\n  int           output_gain;\n  /**The channel mapping family, in the range 0...255.\n     Channel mapping family 0 covers mono or stereo in a single stream.\n     Channel mapping family 1 covers 1 to 8 channels in one or more streams,\n      using the Vorbis speaker assignments.\n     Channel mapping family 255 covers 1 to 255 channels in one or more\n      streams, but without any defined speaker assignment.*/\n  int           mapping_family;\n  /**The number of Opus streams in each Ogg packet, in the range 1...255.*/\n  int           stream_count;\n  /**The number of coupled Opus streams in each Ogg packet, in the range\n      0...127.\n     This must satisfy <code>0 <= coupled_count <= stream_count</code> and\n      <code>coupled_count + stream_count <= 255</code>.\n     The coupled streams appear first, before all uncoupled streams, in an Ogg\n      Opus packet.*/\n  int           coupled_count;\n  /**The mapping from coded stream channels to output channels.\n     Let <code>index=mapping[k]</code> be the value for channel <code>k</code>.\n     If <code>index<2*coupled_count</code>, then it refers to the left channel\n      from stream <code>(index/2)</code> if even, and the right channel from\n      stream <code>(index/2)</code> if odd.\n     Otherwise, it refers to the output of the uncoupled stream\n      
<code>(index-coupled_count)</code>.*/\n  unsigned char mapping[OPUS_CHANNEL_COUNT_MAX];\n};\n\n/**The metadata from an Ogg Opus stream.\n\n   This structure holds the in-stream metadata corresponding to the 'comment'\n    header packet of an Ogg Opus stream.\n   The comment header is meant to be used much like someone jotting a quick\n    note on the label of a CD.\n   It should be a short, to the point text note that can be more than a couple\n    words, but not more than a short paragraph.\n\n   The metadata is stored as a series of (tag, value) pairs, in length-encoded\n    string vectors, using the same format as Vorbis (without the final \"framing\n    bit\"), Theora, and Speex, except for the packet header.\n   The first occurrence of the '=' character delimits the tag and value.\n   A particular tag may occur more than once, and order is significant.\n   The character set encoding for the strings is always UTF-8, but the tag\n    names are limited to ASCII, and treated as case-insensitive.\n   See <a href=\"http://www.xiph.org/vorbis/doc/v-comment.html\">the Vorbis\n    comment header specification</a> for details.\n\n   In filling in this structure, <tt>libopusfile</tt> will null-terminate the\n    #user_comments strings for safety.\n   However, the bitstream format itself treats them as 8-bit clean vectors,\n    possibly containing NUL characters, so the #comment_lengths array should be\n    treated as their authoritative length.\n\n   This structure is binary and source-compatible with a\n    <code>vorbis_comment</code>, and pointers to it may be freely cast to\n    <code>vorbis_comment</code> pointers, and vice versa.\n   It is provided as a separate type to avoid introducing a compile-time\n    dependency on the libvorbis headers.*/\nstruct OpusTags{\n  /**The array of comment string vectors.*/\n  char **user_comments;\n  /**An array of the corresponding length of each vector, in bytes.*/\n  int   *comment_lengths;\n  /**The total number of comment 
streams.*/\n  int    comments;\n  /**The null-terminated vendor string.\n     This identifies the software used to encode the stream.*/\n  char  *vendor;\n};\n\n/**\\name Picture tag image formats*/\n/*@{*/\n\n/**The MIME type was not recognized, or the image data did not match the\n    declared MIME type.*/\n#define OP_PIC_FORMAT_UNKNOWN (-1)\n/**The MIME type indicates the image data is really a URL.*/\n#define OP_PIC_FORMAT_URL     (0)\n/**The image is a JPEG.*/\n#define OP_PIC_FORMAT_JPEG    (1)\n/**The image is a PNG.*/\n#define OP_PIC_FORMAT_PNG     (2)\n/**The image is a GIF.*/\n#define OP_PIC_FORMAT_GIF     (3)\n\n/*@}*/\n\n/**The contents of a METADATA_BLOCK_PICTURE tag.*/\nstruct OpusPictureTag{\n  /**The picture type according to the ID3v2 APIC frame:\n     <ol start=\"0\">\n     <li>Other</li>\n     <li>32x32 pixels 'file icon' (PNG only)</li>\n     <li>Other file icon</li>\n     <li>Cover (front)</li>\n     <li>Cover (back)</li>\n     <li>Leaflet page</li>\n     <li>Media (e.g. 
label side of CD)</li>\n     <li>Lead artist/lead performer/soloist</li>\n     <li>Artist/performer</li>\n     <li>Conductor</li>\n     <li>Band/Orchestra</li>\n     <li>Composer</li>\n     <li>Lyricist/text writer</li>\n     <li>Recording Location</li>\n     <li>During recording</li>\n     <li>During performance</li>\n     <li>Movie/video screen capture</li>\n     <li>A bright colored fish</li>\n     <li>Illustration</li>\n     <li>Band/artist logotype</li>\n     <li>Publisher/Studio logotype</li>\n     </ol>\n     Others are reserved and should not be used.\n     There may only be one each of picture type 1 and 2 in a file.*/\n  opus_int32     type;\n  /**The MIME type of the picture, in printable ASCII characters 0x20-0x7E.\n     The MIME type may also be <code>\"-->\"</code> to signify that the data part\n      is a URL pointing to the picture instead of the picture data itself.\n     In this case, a terminating NUL is appended to the URL string in #data,\n      but #data_length is set to the length of the string excluding that\n      terminating NUL.*/\n  char          *mime_type;\n  /**The description of the picture, in UTF-8.*/\n  char          *description;\n  /**The width of the picture in pixels.*/\n  opus_uint32    width;\n  /**The height of the picture in pixels.*/\n  opus_uint32    height;\n  /**The color depth of the picture in bits-per-pixel (<em>not</em>\n      bits-per-channel).*/\n  opus_uint32    depth;\n  /**For indexed-color pictures (e.g., GIF), the number of colors used, or 0\n      for non-indexed pictures.*/\n  opus_uint32    colors;\n  /**The length of the picture data in bytes.*/\n  opus_uint32    data_length;\n  /**The binary picture data.*/\n  unsigned char *data;\n  /**The format of the picture data, if known.\n     One of\n     <ul>\n     <li>#OP_PIC_FORMAT_UNKNOWN,</li>\n     <li>#OP_PIC_FORMAT_URL,</li>\n     <li>#OP_PIC_FORMAT_JPEG,</li>\n     <li>#OP_PIC_FORMAT_PNG, or</li>\n     <li>#OP_PIC_FORMAT_GIF.</li>\n     </ul>*/\n  int   
         format;\n};\n\n/**\\name Functions for manipulating header data\n\n   These functions manipulate the #OpusHead and #OpusTags structures,\n    which describe the audio parameters and tag-value metadata, respectively.\n   These can be used to query the headers returned by <tt>libopusfile</tt>, or\n    to parse Opus headers from sources other than an Ogg Opus stream, provided\n    they use the same format.*/\n/*@{*/\n\n/**Parses the contents of the ID header packet of an Ogg Opus stream.\n   \\param[out] _head Returns the contents of the parsed packet.\n                     The contents of this structure are untouched on error.\n                     This may be <code>NULL</code> to merely test the header\n                      for validity.\n   \\param[in]  _data The contents of the ID header packet.\n   \\param      _len  The number of bytes of data in the ID header packet.\n   \\return 0 on success or a negative value on error.\n   \\retval #OP_ENOTFORMAT If the data does not start with the \"OpusHead\"\n                           string.\n   \\retval #OP_EVERSION   If the version field signaled a version this library\n                           does not know how to parse.\n   \\retval #OP_EIMPL      If the channel mapping family was 255, which general\n                           purpose players should not attempt to play.\n   \\retval #OP_EBADHEADER If the contents of the packet otherwise violate the\n                           Ogg Opus specification:\n                          <ul>\n                           <li>Insufficient data,</li>\n                           <li>Too much data for the known minor versions,</li>\n                           <li>An unrecognized channel mapping family,</li>\n                           <li>Zero channels or too many channels,</li>\n                           <li>Zero coded streams,</li>\n                           <li>Too many coupled streams, or</li>\n                           <li>An invalid channel mapping index.</li>\n 
                         </ul>*/\nOP_WARN_UNUSED_RESULT int opus_head_parse(OpusHead *_head,\n const unsigned char *_data,size_t _len) OP_ARG_NONNULL(2);\n\n/**Converts a granule position to a sample offset for a given Ogg Opus stream.\n   The sample offset is simply <code>_gp-_head->pre_skip</code>.\n   Granule position values smaller than OpusHead#pre_skip correspond to audio\n    that should never be played, and thus have no associated sample offset.\n   This function returns -1 for such values.\n   This function also correctly handles extremely large granule positions,\n    which may have wrapped around to a negative number when stored in a signed\n    ogg_int64_t value.\n   \\param _head The #OpusHead information from the ID header of the stream.\n   \\param _gp   The granule position to convert.\n   \\return The sample offset associated with the given granule position\n            (counting at a 48 kHz sampling rate), or the special value -1 on\n            error (i.e., the granule position was smaller than the pre-skip\n            amount).*/\nogg_int64_t opus_granule_sample(const OpusHead *_head,ogg_int64_t _gp)\n OP_ARG_NONNULL(1);\n\n/**Parses the contents of the 'comment' header packet of an Ogg Opus stream.\n   \\param[out] _tags An uninitialized #OpusTags structure.\n                     This returns the contents of the parsed packet.\n                     The contents of this structure are untouched on error.\n                     This may be <code>NULL</code> to merely test the header\n                      for validity.\n   \\param[in]  _data The contents of the 'comment' header packet.\n   \\param      _len  The number of bytes of data in the 'info' header packet.\n   \\retval 0              Success.\n   \\retval #OP_ENOTFORMAT If the data does not start with the \"OpusTags\"\n                           string.\n   \\retval #OP_EBADHEADER If the contents of the packet otherwise violate the\n                           Ogg Opus specification.\n   
\\retval #OP_EFAULT     If there wasn't enough memory to store the tags.*/\nOP_WARN_UNUSED_RESULT int opus_tags_parse(OpusTags *_tags,\n const unsigned char *_data,size_t _len) OP_ARG_NONNULL(2);\n\n/**Performs a deep copy of an #OpusTags structure.\n   \\param _dst The #OpusTags structure to copy into.\n               If this function fails, the contents of this structure remain\n                untouched.\n   \\param _src The #OpusTags structure to copy from.\n   \\retval 0          Success.\n   \\retval #OP_EFAULT If there wasn't enough memory to copy the tags.*/\nint opus_tags_copy(OpusTags *_dst,const OpusTags *_src) OP_ARG_NONNULL(1);\n\n/**Initializes an #OpusTags structure.\n   This should be called on a freshly allocated #OpusTags structure before\n    attempting to use it.\n   \\param _tags The #OpusTags structure to initialize.*/\nvoid opus_tags_init(OpusTags *_tags) OP_ARG_NONNULL(1);\n\n/**Add a (tag, value) pair to an initialized #OpusTags structure.\n   \\note Neither opus_tags_add() nor opus_tags_add_comment() support values\n    containing embedded NULs, although the bitstream format does support them.\n   To add such tags, you will need to manipulate the #OpusTags structure\n    directly.\n   \\param _tags  The #OpusTags structure to add the (tag, value) pair to.\n   \\param _tag   A NUL-terminated, case-insensitive, ASCII string containing\n                  the tag to add (without an '=' character).\n   \\param _value A NUL-terminated UTF-8 containing the corresponding value.\n   \\return 0 on success, or a negative value on failure.\n   \\retval #OP_EFAULT An internal memory allocation failed.*/\nint opus_tags_add(OpusTags *_tags,const char *_tag,const char *_value)\n OP_ARG_NONNULL(1) OP_ARG_NONNULL(2) OP_ARG_NONNULL(3);\n\n/**Add a comment to an initialized #OpusTags structure.\n   \\note Neither opus_tags_add_comment() nor opus_tags_add() support comments\n    containing embedded NULs, although the bitstream format does support them.\n   To 
add such tags, you will need to manipulate the #OpusTags structure\n    directly.\n   \\param _tags    The #OpusTags structure to add the comment to.\n   \\param _comment A NUL-terminated UTF-8 string containing the comment in\n                    \"TAG=value\" form.\n   \\return 0 on success, or a negative value on failure.\n   \\retval #OP_EFAULT An internal memory allocation failed.*/\nint opus_tags_add_comment(OpusTags *_tags,const char *_comment)\n OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Replace the binary suffix data at the end of the packet (if any).\n   \\param _tags An initialized #OpusTags structure.\n   \\param _data A buffer of binary data to append after the encoded user\n                 comments.\n                The least significant bit of the first byte of this data must\n                 be set (to ensure the data is preserved by other editors).\n   \\param _len  The number of bytes of binary data to append.\n                This may be zero to remove any existing binary suffix data.\n   \\return 0 on success, or a negative value on error.\n   \\retval #OP_EINVAL \\a _len was negative, or \\a _len was positive but\n                       \\a _data was <code>NULL</code> or the least significant\n                       bit of the first byte was not set.\n   \\retval #OP_EFAULT An internal memory allocation failed.*/\nint opus_tags_set_binary_suffix(OpusTags *_tags,\n const unsigned char *_data,int _len) OP_ARG_NONNULL(1);\n\n/**Look up a comment value by its tag.\n   \\param _tags  An initialized #OpusTags structure.\n   \\param _tag   The tag to look up.\n   \\param _count The instance of the tag.\n                 The same tag can appear multiple times, each with a distinct\n                  value, so an index is required to retrieve them all.\n                 The order in which these values appear is significant and\n                  should be preserved.\n                 Use opus_tags_query_count() to get the legal range for the\n         
         \a _count parameter.\n   \return A pointer to the queried tag's value.\n           This points directly to data in the #OpusTags structure.\n           It should not be modified or freed by the application, and\n            modifications to the structure may invalidate the pointer.\n   \retval NULL If no matching tag is found.*/\nconst char *opus_tags_query(const OpusTags *_tags,const char *_tag,int _count)\n OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Look up the number of instances of a tag.\n   Call this first when querying for a specific tag and then iterate over the\n    number of instances with separate calls to opus_tags_query() to retrieve\n    all the values for that tag in order.\n   \param _tags An initialized #OpusTags structure.\n   \param _tag  The tag to look up.\n   \return The number of instances of this particular tag.*/\nint opus_tags_query_count(const OpusTags *_tags,const char *_tag)\n OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Retrieve the binary suffix data at the end of the packet (if any).\n   \param      _tags An initialized #OpusTags structure.\n   \param[out] _len  Returns the number of bytes of binary suffix data returned.\n   \return A pointer to the binary suffix data, or <code>NULL</code> if none\n            was present.*/\nconst unsigned char *opus_tags_get_binary_suffix(const OpusTags *_tags,\n int *_len) OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Get the album gain from an R128_ALBUM_GAIN tag, if one was specified.\n   This searches for the first R128_ALBUM_GAIN tag with a valid signed,\n    16-bit decimal integer value and returns the value.\n   This routine is exposed merely for convenience for applications which wish\n    to do something special with the album gain (e.g., display it).\n   If you simply wish to apply the album gain instead of the header gain, you\n    can use op_set_gain_offset() with an #OP_ALBUM_GAIN type and no offset.\n   \param      _tags    An initialized #OpusTags structure.\n   
\param[out] _gain_q8 The album gain, in 1/256ths of a dB.\n                        This will lie in the range [-32768,32767], and should\n                         be applied in <em>addition</em> to the header gain.\n                        On error, no value is returned, and the previous\n                         contents remain unchanged.\n   \return 0 on success, or a negative value on error.\n   \retval #OP_FALSE There was no album gain available in the given tags.*/\nint opus_tags_get_album_gain(const OpusTags *_tags,int *_gain_q8)\n OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Get the track gain from an R128_TRACK_GAIN tag, if one was specified.\n   This searches for the first R128_TRACK_GAIN tag with a valid signed,\n    16-bit decimal integer value and returns the value.\n   This routine is exposed merely for convenience for applications which wish\n    to do something special with the track gain (e.g., display it).\n   If you simply wish to apply the track gain instead of the header gain, you\n    can use op_set_gain_offset() with an #OP_TRACK_GAIN type and no offset.\n   \param      _tags    An initialized #OpusTags structure.\n   \param[out] _gain_q8 The track gain, in 1/256ths of a dB.\n                        This will lie in the range [-32768,32767], and should\n                         be applied in <em>addition</em> to the header gain.\n                        On error, no value is returned, and the previous\n                         contents remain unchanged.\n   \return 0 on success, or a negative value on error.\n   \retval #OP_FALSE There was no track gain available in the given tags.*/\nint opus_tags_get_track_gain(const OpusTags *_tags,int *_gain_q8)\n OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Clears the #OpusTags structure.\n   This should be called on an #OpusTags structure after it is no longer\n    needed.\n   It will free all memory used by the structure members.\n   \param _tags The #OpusTags structure to clear.*/\nvoid 
opus_tags_clear(OpusTags *_tags) OP_ARG_NONNULL(1);\n\n/**Check if \a _comment is an instance of a \a _tag_name tag.\n   \see opus_tagncompare\n   \param _tag_name A NUL-terminated, case-insensitive, ASCII string containing\n                     the name of the tag to check for (without the terminating\n                     '=' character).\n   \param _comment  The comment string to check.\n   \return An integer less than, equal to, or greater than zero if \a _comment\n            is found, respectively, to be less than, to match, or be greater\n            than a \"tag=value\" string whose tag matches \a _tag_name.*/\nint opus_tagcompare(const char *_tag_name,const char *_comment);\n\n/**Check if \a _comment is an instance of a \a _tag_name tag.\n   This version is slightly more efficient than opus_tagcompare() if the length\n    of the tag name is already known (e.g., because it is a constant).\n   \see opus_tagcompare\n   \param _tag_name A case-insensitive ASCII string containing the name of the\n                     tag to check for (without the terminating '=' character).\n   \param _tag_len  The number of characters in the tag name.\n                    This must be non-negative.\n   \param _comment  The comment string to check.\n   \return An integer less than, equal to, or greater than zero if \a _comment\n            is found, respectively, to be less than, to match, or be greater\n            than a \"tag=value\" string whose tag matches the first \a _tag_len\n            characters of \a _tag_name.*/\nint opus_tagncompare(const char *_tag_name,int _tag_len,const char *_comment);\n\n/**Parse a single METADATA_BLOCK_PICTURE tag.\n   This decodes the BASE64-encoded content of the tag and returns a structure\n    with the MIME type, description, image parameters (if known), and the\n    compressed image data.\n   If the MIME type indicates the presence of an image format we recognize\n    (JPEG, PNG, or GIF) and the actual image data contains 
the magic signature\n    associated with that format, then the OpusPictureTag::format field will be\n    set to the corresponding format.\n   This is provided as a convenience to avoid requiring applications to parse\n    the MIME type and/or do their own format detection for the commonly used\n    formats.\n   In this case, we also attempt to extract the image parameters directly from\n    the image data (overriding any that were present in the tag, which the\n    specification says applications are not meant to rely on).\n   The application must still provide its own support for actually decoding the\n    image data and, if applicable, retrieving that data from URLs.\n   \param[out] _pic Returns the parsed picture data.\n                    No sanitization is done on the type, MIME type, or\n                     description fields, so these might return invalid values.\n                    The contents of this structure are left unmodified on\n                     failure.\n   \param      _tag The METADATA_BLOCK_PICTURE tag contents.\n                    The leading \"METADATA_BLOCK_PICTURE=\" portion is optional,\n                     to allow the function to be used either directly on the\n                     values in OpusTags::user_comments or on the return value\n                     of opus_tags_query().\n   \return 0 on success, or a negative value on error.\n   \retval #OP_ENOTFORMAT The METADATA_BLOCK_PICTURE contents were not valid.\n   \retval #OP_EFAULT     There was not enough memory to store the picture tag\n                           contents.*/\nOP_WARN_UNUSED_RESULT int opus_picture_tag_parse(OpusPictureTag *_pic,\n const char *_tag) OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Initializes an #OpusPictureTag structure.\n   This should be called on a freshly allocated #OpusPictureTag structure\n    before attempting to use it.\n   \param _pic The #OpusPictureTag structure to initialize.*/\nvoid opus_picture_tag_init(OpusPictureTag *_pic) 
OP_ARG_NONNULL(1);\n\n/**Clears the #OpusPictureTag structure.\n   This should be called on an #OpusPictureTag structure after it is no longer\n    needed.\n   It will free all memory used by the structure members.\n   \\param _pic The #OpusPictureTag structure to clear.*/\nvoid opus_picture_tag_clear(OpusPictureTag *_pic) OP_ARG_NONNULL(1);\n\n/*@}*/\n\n/*@}*/\n\n/**\\defgroup url_options URL Reading Options*/\n/*@{*/\n/**\\name URL reading options\n   Options for op_url_stream_create() and associated functions.\n   These allow you to provide proxy configuration parameters, skip SSL\n    certificate checks, etc.\n   Options are processed in order, and if the same option is passed multiple\n    times, only the value specified by the last occurrence has an effect\n    (unless otherwise specified).\n   They may be expanded in the future.*/\n/*@{*/\n\n/**@cond PRIVATE*/\n\n/*These are the raw numbers used to define the request codes.\n  They should not be used directly.*/\n#define OP_SSL_SKIP_CERTIFICATE_CHECK_REQUEST (6464)\n#define OP_HTTP_PROXY_HOST_REQUEST            (6528)\n#define OP_HTTP_PROXY_PORT_REQUEST            (6592)\n#define OP_HTTP_PROXY_USER_REQUEST            (6656)\n#define OP_HTTP_PROXY_PASS_REQUEST            (6720)\n#define OP_GET_SERVER_INFO_REQUEST            (6784)\n\n#define OP_URL_OPT(_request) ((_request)+(char *)0)\n\n/*These macros trigger compilation errors or warnings if the wrong types are\n   provided to one of the URL options.*/\n#define OP_CHECK_INT(_x) ((void)((_x)==(opus_int32)0),(opus_int32)(_x))\n#define OP_CHECK_CONST_CHAR_PTR(_x) ((_x)+((_x)-(const char *)(_x)))\n#define OP_CHECK_SERVER_INFO_PTR(_x) ((_x)+((_x)-(OpusServerInfo *)(_x)))\n\n/**@endcond*/\n\n/**HTTP/Shoutcast/Icecast server information associated with a URL.*/\nstruct OpusServerInfo{\n  /**The name of the server (icy-name/ice-name).\n     This is <code>NULL</code> if there was no <code>icy-name</code> or\n      <code>ice-name</code> header.*/\n  char        
*name;\n  /**A short description of the server (icy-description/ice-description).\n     This is <code>NULL</code> if there was no <code>icy-description</code> or\n      <code>ice-description</code> header.*/\n  char        *description;\n  /**The genre the server falls under (icy-genre/ice-genre).\n     This is <code>NULL</code> if there was no <code>icy-genre</code> or\n      <code>ice-genre</code> header.*/\n  char        *genre;\n  /**The homepage for the server (icy-url/ice-url).\n     This is <code>NULL</code> if there was no <code>icy-url</code> or\n      <code>ice-url</code> header.*/\n  char        *url;\n  /**The software used by the origin server (Server).\n     This is <code>NULL</code> if there was no <code>Server</code> header.*/\n  char        *server;\n  /**The media type of the entity sent to the recipient (Content-Type).\n     This is <code>NULL</code> if there was no <code>Content-Type</code>\n      header.*/\n  char        *content_type;\n  /**The nominal stream bitrate in kbps (icy-br/ice-bitrate).\n     This is <code>-1</code> if there was no <code>icy-br</code> or\n      <code>ice-bitrate</code> header.*/\n  opus_int32   bitrate_kbps;\n  /**Flag indicating whether the server is public (<code>1</code>) or not\n      (<code>0</code>) (icy-pub/ice-public).\n     This is <code>-1</code> if there was no <code>icy-pub</code> or\n      <code>ice-public</code> header.*/\n  int          is_public;\n  /**Flag indicating whether the server is using HTTPS instead of HTTP.\n     This is <code>0</code> unless HTTPS is being used.\n     This may not match the protocol used in the original URL if there were\n      redirections.*/\n  int          is_ssl;\n};\n\n/**Initializes an #OpusServerInfo structure.\n   All fields are set as if the corresponding header was not available.\n   \param _info The #OpusServerInfo structure to initialize.\n   \note If you use this function, you must link against <tt>libopusurl</tt>.*/\nvoid 
opus_server_info_init(OpusServerInfo *_info) OP_ARG_NONNULL(1);\n\n/**Clears the #OpusServerInfo structure.\n   This should be called on an #OpusServerInfo structure after it is no longer\n    needed.\n   It will free all memory used by the structure members.\n   \\param _info The #OpusServerInfo structure to clear.\n   \\note If you use this function, you must link against <tt>libopusurl</tt>.*/\nvoid opus_server_info_clear(OpusServerInfo *_info) OP_ARG_NONNULL(1);\n\n/**Skip the certificate check when connecting via TLS/SSL (https).\n   \\param _b <code>opus_int32</code>: Whether or not to skip the certificate\n              check.\n             The check will be skipped if \\a _b is non-zero, and will not be\n              skipped if \\a _b is zero.\n   \\hideinitializer*/\n#define OP_SSL_SKIP_CERTIFICATE_CHECK(_b) \\\n OP_URL_OPT(OP_SSL_SKIP_CERTIFICATE_CHECK_REQUEST),OP_CHECK_INT(_b)\n\n/**Proxy connections through the given host.\n   If no port is specified via #OP_HTTP_PROXY_PORT, the port number defaults\n    to 8080 (http-alt).\n   All proxy parameters are ignored for non-http and non-https URLs.\n   \\param _host <code>const char *</code>: The proxy server hostname.\n                This may be <code>NULL</code> to disable the use of a proxy\n                 server.\n   \\hideinitializer*/\n#define OP_HTTP_PROXY_HOST(_host) \\\n OP_URL_OPT(OP_HTTP_PROXY_HOST_REQUEST),OP_CHECK_CONST_CHAR_PTR(_host)\n\n/**Use the given port when proxying connections.\n   This option only has an effect if #OP_HTTP_PROXY_HOST is specified with a\n    non-<code>NULL</code> \\a _host.\n   If this option is not provided, the proxy port number defaults to 8080\n    (http-alt).\n   All proxy parameters are ignored for non-http and non-https URLs.\n   \\param _port <code>opus_int32</code>: The proxy server port.\n                This must be in the range 0...65535 (inclusive), or the\n                 URL function this is passed to will fail.\n   \\hideinitializer*/\n#define 
OP_HTTP_PROXY_PORT(_port) \\\n OP_URL_OPT(OP_HTTP_PROXY_PORT_REQUEST),OP_CHECK_INT(_port)\n\n/**Use the given user name for authentication when proxying connections.\n   All proxy parameters are ignored for non-http and non-https URLs.\n   \\param _user const char *: The proxy server user name.\n                              This may be <code>NULL</code> to disable proxy\n                               authentication.\n                              A non-<code>NULL</code> value only has an effect\n                               if #OP_HTTP_PROXY_HOST and #OP_HTTP_PROXY_PASS\n                               are also specified with non-<code>NULL</code>\n                               arguments.\n   \\hideinitializer*/\n#define OP_HTTP_PROXY_USER(_user) \\\n OP_URL_OPT(OP_HTTP_PROXY_USER_REQUEST),OP_CHECK_CONST_CHAR_PTR(_user)\n\n/**Use the given password for authentication when proxying connections.\n   All proxy parameters are ignored for non-http and non-https URLs.\n   \\param _pass const char *: The proxy server password.\n                              This may be <code>NULL</code> to disable proxy\n                               authentication.\n                              A non-<code>NULL</code> value only has an effect\n                               if #OP_HTTP_PROXY_HOST and #OP_HTTP_PROXY_USER\n                               are also specified with non-<code>NULL</code>\n                               arguments.\n   \\hideinitializer*/\n#define OP_HTTP_PROXY_PASS(_pass) \\\n OP_URL_OPT(OP_HTTP_PROXY_PASS_REQUEST),OP_CHECK_CONST_CHAR_PTR(_pass)\n\n/**Parse information about the streaming server (if any) and return it.\n   Very little validation is done.\n   In particular, OpusServerInfo::url may not be a valid URL,\n    OpusServerInfo::bitrate_kbps may not really be in kbps, and\n    OpusServerInfo::content_type may not be a valid MIME type.\n   The character set of the string fields is not specified anywhere, and should\n    not be assumed to be valid 
UTF-8.\n   \\param _info OpusServerInfo *: Returns information about the server.\n                                  If there is any error opening the stream, the\n                                   contents of this structure remain\n                                   unmodified.\n                                  On success, fills in the structure with the\n                                   server information that was available, if\n                                   any.\n                                  After a successful return, the contents of\n                                   this structure should be freed by calling\n                                   opus_server_info_clear().\n   \\hideinitializer*/\n#define OP_GET_SERVER_INFO(_info) \\\n OP_URL_OPT(OP_GET_SERVER_INFO_REQUEST),OP_CHECK_SERVER_INFO_PTR(_info)\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup stream_callbacks Abstract Stream Reading Interface*/\n/*@{*/\n/**\\name Functions for reading from streams\n   These functions define the interface used to read from and seek in a stream\n    of data.\n   A stream does not need to implement seeking, but the decoder will not be\n    able to seek if it does not do so.\n   These functions also include some convenience routines for working with\n    standard <code>FILE</code> pointers, complete streams stored in a single\n    block of memory, or URLs.*/\n/*@{*/\n\n/**Reads up to \\a _nbytes bytes of data from \\a _stream.\n   \\param      _stream The stream to read from.\n   \\param[out] _ptr    The buffer to store the data in.\n   \\param      _nbytes The maximum number of bytes to read.\n                       This function may return fewer, though it will not\n                        return zero unless it reaches end-of-file.\n   \\return The number of bytes successfully read, or a negative value on\n            error.*/\ntypedef int (*op_read_func)(void *_stream,unsigned char *_ptr,int _nbytes);\n\n/**Sets the position indicator for \\a _stream.\n   The new 
position, measured in bytes, is obtained by adding \\a _offset\n    bytes to the position specified by \\a _whence.\n   If \\a _whence is set to <code>SEEK_SET</code>, <code>SEEK_CUR</code>, or\n    <code>SEEK_END</code>, the offset is relative to the start of the stream,\n    the current position indicator, or end-of-file, respectively.\n   \\retval 0  Success.\n   \\retval -1 Seeking is not supported or an error occurred.\n              <code>errno</code> need not be set.*/\ntypedef int (*op_seek_func)(void *_stream,opus_int64 _offset,int _whence);\n\n/**Obtains the current value of the position indicator for \\a _stream.\n   \\return The current position indicator.*/\ntypedef opus_int64 (*op_tell_func)(void *_stream);\n\n/**Closes the underlying stream.\n   \\retval 0   Success.\n   \\retval EOF An error occurred.\n               <code>errno</code> need not be set.*/\ntypedef int (*op_close_func)(void *_stream);\n\n/**The callbacks used to access non-<code>FILE</code> stream resources.\n   The function prototypes are basically the same as for the stdio functions\n    <code>fread()</code>, <code>fseek()</code>, <code>ftell()</code>, and\n    <code>fclose()</code>.\n   The differences are that the <code>FILE *</code> arguments have been\n    replaced with a <code>void *</code>, which is to be used as a pointer to\n    whatever internal data these functions might need, that #seek and #tell\n    take and return 64-bit offsets, and that #seek <em>must</em> return -1 if\n    the stream is unseekable.*/\nstruct OpusFileCallbacks{\n  /**Used to read data from the stream.\n     This must not be <code>NULL</code>.*/\n  op_read_func  read;\n  /**Used to seek in the stream.\n     This may be <code>NULL</code> if seeking is not implemented.*/\n  op_seek_func  seek;\n  /**Used to return the current read position in the stream.\n     This may be <code>NULL</code> if seeking is not implemented.*/\n  op_tell_func  tell;\n  /**Used to close the stream when the decoder is freed.\n 
    This may be <code>NULL</code> to leave the stream open.*/\n  op_close_func close;\n};\n\n/**Opens a stream with <code>fopen()</code> and fills in a set of callbacks\n    that can be used to access it.\n   This is useful to avoid writing your own portable 64-bit seeking wrappers,\n    and also avoids cross-module linking issues on Windows, where a\n    <code>FILE *</code> must be accessed by routines defined in the same module\n    that opened it.\n   \\param[out] _cb   The callbacks to use for this file.\n                     If there is an error opening the file, nothing will be\n                      filled in here.\n   \\param      _path The path to the file to open.\n                     On Windows, this string must be UTF-8 (to allow access to\n                      files whose names cannot be represented in the current\n                      MBCS code page).\n                     All other systems use the native character encoding.\n   \\param      _mode The mode to open the file in.\n   \\return A stream handle to use with the callbacks, or <code>NULL</code> on\n            error.*/\nOP_WARN_UNUSED_RESULT void *op_fopen(OpusFileCallbacks *_cb,\n const char *_path,const char *_mode) OP_ARG_NONNULL(1) OP_ARG_NONNULL(2)\n OP_ARG_NONNULL(3);\n\n/**Opens a stream with <code>fdopen()</code> and fills in a set of callbacks\n    that can be used to access it.\n   This is useful to avoid writing your own portable 64-bit seeking wrappers,\n    and also avoids cross-module linking issues on Windows, where a\n    <code>FILE *</code> must be accessed by routines defined in the same module\n    that opened it.\n   \\param[out] _cb   The callbacks to use for this file.\n                     If there is an error opening the file, nothing will be\n                      filled in here.\n   \\param      _fd   The file descriptor to open.\n   \\param      _mode The mode to open the file in.\n   \\return A stream handle to use with the callbacks, or <code>NULL</code> on\n    
        error.*/\nOP_WARN_UNUSED_RESULT void *op_fdopen(OpusFileCallbacks *_cb,\n int _fd,const char *_mode) OP_ARG_NONNULL(1) OP_ARG_NONNULL(3);\n\n/**Opens a stream with <code>freopen()</code> and fills in a set of callbacks\n    that can be used to access it.\n   This is useful to avoid writing your own portable 64-bit seeking wrappers,\n    and also avoids cross-module linking issues on Windows, where a\n    <code>FILE *</code> must be accessed by routines defined in the same module\n    that opened it.\n   \\param[out] _cb     The callbacks to use for this file.\n                       If there is an error opening the file, nothing will be\n                        filled in here.\n   \\param      _path   The path to the file to open.\n                       On Windows, this string must be UTF-8 (to allow access\n                        to files whose names cannot be represented in the\n                        current MBCS code page).\n                       All other systems use the native character encoding.\n   \\param      _mode   The mode to open the file in.\n   \\param      _stream A stream previously returned by op_fopen(), op_fdopen(),\n                        or op_freopen().\n   \\return A stream handle to use with the callbacks, or <code>NULL</code> on\n            error.*/\nOP_WARN_UNUSED_RESULT void *op_freopen(OpusFileCallbacks *_cb,\n const char *_path,const char *_mode,void *_stream) OP_ARG_NONNULL(1)\n OP_ARG_NONNULL(2) OP_ARG_NONNULL(3) OP_ARG_NONNULL(4);\n\n/**Creates a stream that reads from the given block of memory.\n   This block of memory must contain the complete stream to decode.\n   This is useful for caching small streams (e.g., sound effects) in RAM.\n   \\param[out] _cb   The callbacks to use for this stream.\n                     If there is an error creating the stream, nothing will be\n                      filled in here.\n   \\param      _data The block of memory to read from.\n   \\param      _size The size of the block of 
memory.\n   \\return A stream handle to use with the callbacks, or <code>NULL</code> on\n            error.*/\nOP_WARN_UNUSED_RESULT void *op_mem_stream_create(OpusFileCallbacks *_cb,\n const unsigned char *_data,size_t _size) OP_ARG_NONNULL(1);\n\n/**Creates a stream that reads from the given URL.\n   This function behaves identically to op_url_stream_create(), except that it\n    takes a va_list instead of a variable number of arguments.\n   It does not call the <code>va_end</code> macro, and because it invokes the\n    <code>va_arg</code> macro, the value of \\a _ap is undefined after the call.\n   \\note If you use this function, you must link against <tt>libopusurl</tt>.\n   \\param[out]    _cb  The callbacks to use for this stream.\n                       If there is an error creating the stream, nothing will\n                        be filled in here.\n   \\param         _url The URL to read from.\n                       Currently only the <file:>, <http:>, and <https:>\n                        schemes are supported.\n                       Both <http:> and <https:> may be disabled at compile\n                        time, in which case opening such URLs will always fail.\n                       Currently this only supports URIs.\n                       IRIs should be converted to UTF-8 and URL-escaped, with\n                        internationalized domain names encoded in punycode,\n                        before passing them to this function.\n   \\param[in,out] _ap  A list of the \\ref url_options \"optional flags\" to use.\n                       This is a variable-length list of options terminated\n                        with <code>NULL</code>.\n   \\return A stream handle to use with the callbacks, or <code>NULL</code> on\n            error.*/\nOP_WARN_UNUSED_RESULT void *op_url_stream_vcreate(OpusFileCallbacks *_cb,\n const char *_url,va_list _ap) OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/**Creates a stream that reads from the given URL.\n   \\note 
If you use this function, you must link against <tt>libopusurl</tt>.\n   \\param[out] _cb  The callbacks to use for this stream.\n                    If there is an error creating the stream, nothing will be\n                     filled in here.\n   \\param      _url The URL to read from.\n                    Currently only the <file:>, <http:>, and <https:> schemes\n                     are supported.\n                    Both <http:> and <https:> may be disabled at compile time,\n                     in which case opening such URLs will always fail.\n                    Currently this only supports URIs.\n                    IRIs should be converted to UTF-8 and URL-escaped, with\n                     internationalized domain names encoded in punycode, before\n                     passing them to this function.\n   \\param      ...  The \\ref url_options \"optional flags\" to use.\n                    This is a variable-length list of options terminated with\n                     <code>NULL</code>.\n   \\return A stream handle to use with the callbacks, or <code>NULL</code> on\n            error.*/\nOP_WARN_UNUSED_RESULT void *op_url_stream_create(OpusFileCallbacks *_cb,\n const char *_url,...) 
OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup stream_open_close Opening and Closing*/\n/*@{*/\n/**\\name Functions for opening and closing streams\n\n   These functions allow you to test a stream to see if it is Opus, open it,\n    and close it.\n   Several flavors are provided for each of the built-in stream types, plus a\n    more general version which takes a set of application-provided callbacks.*/\n/*@{*/\n\n/**Test to see if this is an Opus stream.\n   For good results, you will need at least 57 bytes (for a pure Opus-only\n    stream).\n   Something like 512 bytes will give more reliable results for multiplexed\n    streams.\n   This function is meant to be a quick-rejection filter.\n   Its purpose is not to guarantee that a stream is a valid Opus stream, but to\n    ensure that it looks enough like Opus that it isn't going to be recognized\n    as some other format (except possibly an Opus stream that is also\n    multiplexed with other codecs, such as video).\n   \\param[out] _head     The parsed ID header contents.\n                         You may pass <code>NULL</code> if you do not need\n                          this information.\n                         If the function fails, the contents of this structure\n                          remain untouched.\n   \\param _initial_data  An initial buffer of data from the start of the\n                          stream.\n   \\param _initial_bytes The number of bytes in \\a _initial_data.\n   \\return 0 if the data appears to be Opus, or a negative value on error.\n   \\retval #OP_FALSE      There was not enough data to tell if this was an Opus\n                           stream or not.\n   \\retval #OP_EFAULT     An internal memory allocation failed.\n   \\retval #OP_EIMPL      The stream used a feature that is not implemented,\n                           such as an unsupported channel family.\n   \\retval #OP_ENOTFORMAT If the data did not contain a recognizable ID\n                   
        header for an Opus stream.\n   \\retval #OP_EVERSION   If the version field signaled a version this library\n                           does not know how to parse.\n   \\retval #OP_EBADHEADER The ID header was not properly formatted or contained\n                           illegal values.*/\nint op_test(OpusHead *_head,\n const unsigned char *_initial_data,size_t _initial_bytes);\n\n/**Open a stream from the given file path.\n   \\param      _path  The path to the file to open.\n   \\param[out] _error Returns 0 on success, or a failure code on error.\n                      You may pass in <code>NULL</code> if you don't want the\n                       failure code.\n                      The failure code will be #OP_EFAULT if the file could not\n                       be opened, or one of the other failure codes from\n                       op_open_callbacks() otherwise.\n   \\return A freshly opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_open_file(const char *_path,int *_error)\n OP_ARG_NONNULL(1);\n\n/**Open a stream from a memory buffer.\n   \\param      _data  The memory buffer to open.\n   \\param      _size  The number of bytes in the buffer.\n   \\param[out] _error Returns 0 on success, or a failure code on error.\n                      You may pass in <code>NULL</code> if you don't want the\n                       failure code.\n                      See op_open_callbacks() for a full list of failure codes.\n   \\return A freshly opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_open_memory(const unsigned char *_data,\n size_t _size,int *_error);\n\n/**Open a stream from a URL.\n   This function behaves identically to op_open_url(), except that it\n    takes a va_list instead of a variable number of arguments.\n   It does not call the <code>va_end</code> macro, and because it invokes the\n    <code>va_arg</code> macro, the value of \\a _ap is undefined 
after the call.\n   \\note If you use this function, you must link against <tt>libopusurl</tt>.\n   \\param         _url   The URL to open.\n                         Currently only the <file:>, <http:>, and <https:>\n                          schemes are supported.\n                         Both <http:> and <https:> may be disabled at compile\n                          time, in which case opening such URLs will always\n                          fail.\n                         Currently this only supports URIs.\n                         IRIs should be converted to UTF-8 and URL-escaped,\n                          with internationalized domain names encoded in\n                          punycode, before passing them to this function.\n   \\param[out]    _error Returns 0 on success, or a failure code on error.\n                         You may pass in <code>NULL</code> if you don't want\n                          the failure code.\n                         See op_open_callbacks() for a full list of failure\n                          codes.\n   \\param[in,out] _ap    A list of the \\ref url_options \"optional flags\" to\n                          use.\n                         This is a variable-length list of options terminated\n                          with <code>NULL</code>.\n   \\return A freshly opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_vopen_url(const char *_url,\n int *_error,va_list _ap) OP_ARG_NONNULL(1);\n\n/**Open a stream from a URL.\n   \\note If you use this function, you must link against <tt>libopusurl</tt>.\n   \\param      _url   The URL to open.\n                      Currently only the <file:>, <http:>, and <https:> schemes\n                       are supported.\n                      Both <http:> and <https:> may be disabled at compile\n                       time, in which case opening such URLs will always fail.\n                      Currently this only supports URIs.\n                      
IRIs should be converted to UTF-8 and URL-escaped, with\n                       internationalized domain names encoded in punycode,\n                       before passing them to this function.\n   \\param[out] _error Returns 0 on success, or a failure code on error.\n                      You may pass in <code>NULL</code> if you don't want the\n                       failure code.\n                      See op_open_callbacks() for a full list of failure codes.\n   \\param      ...    The \\ref url_options \"optional flags\" to use.\n                      This is a variable-length list of options terminated with\n                       <code>NULL</code>.\n   \\return A freshly opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_open_url(const char *_url,\n int *_error,...) OP_ARG_NONNULL(1);\n\n/**Open a stream using the given set of callbacks to access it.\n   \\param _stream        The stream to read from (e.g., a <code>FILE *</code>).\n                         This value will be passed verbatim as the first\n                          argument to all of the callbacks.\n   \\param _cb            The callbacks with which to access the stream.\n                         <code><a href=\"#op_read_func\">read()</a></code> must\n                          be implemented.\n                         <code><a href=\"#op_seek_func\">seek()</a></code> and\n                          <code><a href=\"#op_tell_func\">tell()</a></code> may\n                          be <code>NULL</code>, or may always return -1 to\n                          indicate a stream is unseekable, but if\n                          <code><a href=\"#op_seek_func\">seek()</a></code> is\n                          implemented and succeeds on a particular stream, then\n                          <code><a href=\"#op_tell_func\">tell()</a></code> must\n                          also.\n                         <code><a href=\"#op_close_func\">close()</a></code> may\n        
                  be <code>NULL</code>, but if it is not, it will be\n                          called when the \\c OggOpusFile is destroyed by\n                          op_free().\n                         It will not be called if op_open_callbacks() fails\n                          with an error.\n   \\param _initial_data  An initial buffer of data from the start of the\n                          stream.\n                         Applications can read some number of bytes from the\n                          start of the stream to help identify this as an Opus\n                          stream, and then provide them here to allow the\n                          stream to be opened, even if it is unseekable.\n   \\param _initial_bytes The number of bytes in \\a _initial_data.\n                         If the stream is seekable, its current position (as\n                          reported by\n                          <code><a href=\"#op_tell_func\">tell()</a></code>\n                          at the start of this function) must be equal to\n                          \\a _initial_bytes.\n                         Otherwise, seeking to absolute positions will\n                          generate inconsistent results.\n   \\param[out] _error    Returns 0 on success, or a failure code on error.\n                         You may pass in <code>NULL</code> if you don't want\n                          the failure code.\n                         The failure code will be one of\n                         <dl>\n                           <dt>#OP_EREAD</dt>\n                           <dd>An underlying read, seek, or tell operation\n                            failed when it should have succeeded, or we failed\n                            to find data in the stream we had seen before.</dd>\n                           <dt>#OP_EFAULT</dt>\n                           <dd>There was a memory allocation failure, or an\n                            internal library error.</dd>\n        
                   <dt>#OP_EIMPL</dt>\n                           <dd>The stream used a feature that is not\n                            implemented, such as an unsupported channel\n                            family.</dd>\n                           <dt>#OP_EINVAL</dt>\n                           <dd><code><a href=\"#op_seek_func\">seek()</a></code>\n                            was implemented and succeeded on this source, but\n                            <code><a href=\"#op_tell_func\">tell()</a></code>\n                            did not, or the starting position indicator was\n                            not equal to \\a _initial_bytes.</dd>\n                           <dt>#OP_ENOTFORMAT</dt>\n                           <dd>The stream contained a link that did not have\n                            any logical Opus streams in it.</dd>\n                           <dt>#OP_EBADHEADER</dt>\n                           <dd>A required header packet was not properly\n                            formatted, contained illegal values, or was missing\n                            altogether.</dd>\n                           <dt>#OP_EVERSION</dt>\n                           <dd>An ID header contained an unrecognized version\n                            number.</dd>\n                           <dt>#OP_EBADLINK</dt>\n                           <dd>We failed to find data we had seen before after\n                            seeking.</dd>\n                           <dt>#OP_EBADTIMESTAMP</dt>\n                           <dd>The first or last timestamp in a link failed\n                            basic validity checks.</dd>\n                         </dl>\n   \\return A freshly opened \\c OggOpusFile, or <code>NULL</code> on error.\n           <tt>libopusfile</tt> does <em>not</em> take ownership of the stream\n            if the call fails.\n           The calling application is responsible for closing the stream if\n            this call returns an 
error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_open_callbacks(void *_stream,\n const OpusFileCallbacks *_cb,const unsigned char *_initial_data,\n size_t _initial_bytes,int *_error) OP_ARG_NONNULL(2);\n\n/**Partially open a stream from the given file path.\n   \\see op_test_callbacks\n   \\param      _path  The path to the file to open.\n   \\param[out] _error Returns 0 on success, or a failure code on error.\n                      You may pass in <code>NULL</code> if you don't want the\n                       failure code.\n                      The failure code will be #OP_EFAULT if the file could not\n                       be opened, or one of the other failure codes from\n                       op_open_callbacks() otherwise.\n   \\return A partially opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_test_file(const char *_path,int *_error)\n OP_ARG_NONNULL(1);\n\n/**Partially open a stream from a memory buffer.\n   \\see op_test_callbacks\n   \\param      _data  The memory buffer to open.\n   \\param      _size  The number of bytes in the buffer.\n   \\param[out] _error Returns 0 on success, or a failure code on error.\n                      You may pass in <code>NULL</code> if you don't want the\n                       failure code.\n                      See op_open_callbacks() for a full list of failure codes.\n   \\return A partially opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_test_memory(const unsigned char *_data,\n size_t _size,int *_error);\n\n/**Partially open a stream from a URL.\n   This function behaves identically to op_test_url(), except that it\n    takes a va_list instead of a variable number of arguments.\n   It does not call the <code>va_end</code> macro, and because it invokes the\n    <code>va_arg</code> macro, the value of \\a _ap is undefined after the call.\n   \\note If you use this function, you must link against <tt>libopusurl</tt>.\n   
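For example, an application can forward its own variable argument list of
    URL options with the usual <code>va_list</code> pattern (a sketch;
    <code>my_test_url</code> is a hypothetical wrapper, not part of this API,
    and the caller must still terminate the option list with
    <code>NULL</code>):

```c
#include <stdarg.h>
#include <opusfile.h>

static OggOpusFile *my_test_url(const char *_url,int *_error,...){
  OggOpusFile *of;
  va_list      ap;
  va_start(ap,_error);
  of=op_vtest_url(_url,_error,ap);
  va_end(ap);
  return of;
}
```

   As with op_test_url(), the returned \c OggOpusFile is only partially open,
    and must be finished with op_test_open() or disposed of with op_free().
   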
\\see op_test_url\n   \\see op_test_callbacks\n   \\param         _url    The URL to open.\n                          Currently only the <file:>, <http:>, and <https:>\n                           schemes are supported.\n                          Both <http:> and <https:> may be disabled at compile\n                           time, in which case opening such URLs will always\n                           fail.\n                          Currently this only supports URIs.\n                          IRIs should be converted to UTF-8 and URL-escaped,\n                           with internationalized domain names encoded in\n                           punycode, before passing them to this function.\n   \\param[out]    _error  Returns 0 on success, or a failure code on error.\n                          You may pass in <code>NULL</code> if you don't want\n                           the failure code.\n                          See op_open_callbacks() for a full list of failure\n                           codes.\n   \\param[in,out] _ap     A list of the \\ref url_options \"optional flags\" to\n                           use.\n                          This is a variable-length list of options terminated\n                           with <code>NULL</code>.\n   \\return A partially opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_vtest_url(const char *_url,\n int *_error,va_list _ap) OP_ARG_NONNULL(1);\n\n/**Partially open a stream from a URL.\n   \\note If you use this function, you must link against <tt>libopusurl</tt>.\n   \\see op_test_callbacks\n   \\param      _url    The URL to open.\n                       Currently only the <file:>, <http:>, and <https:>\n                        schemes are supported.\n                       Both <http:> and <https:> may be disabled at compile\n                        time, in which case opening such URLs will always fail.\n                       Currently this only supports URIs.\n        
               IRIs should be converted to UTF-8 and URL-escaped, with\n                        internationalized domain names encoded in punycode,\n                        before passing them to this function.\n   \\param[out] _error  Returns 0 on success, or a failure code on error.\n                       You may pass in <code>NULL</code> if you don't want the\n                        failure code.\n                       See op_open_callbacks() for a full list of failure\n                        codes.\n   \\param      ...     The \\ref url_options \"optional flags\" to use.\n                       This is a variable-length list of options terminated\n                        with <code>NULL</code>.\n   \\return A partially opened \\c OggOpusFile, or <code>NULL</code> on error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_test_url(const char *_url,\n int *_error,...) OP_ARG_NONNULL(1);\n\n/**Partially open a stream using the given set of callbacks to access it.\n   This tests for Opusness and loads the headers for the first link.\n   It does not seek (although it tests for seekability).\n   You can query a partially open stream for the few pieces of basic\n    information returned by op_serialno(), op_channel_count(), op_head(), and\n    op_tags() (but only for the first link).\n   You may also determine if it is seekable via a call to op_seekable().\n   You cannot read audio from the stream, seek, get the size or duration,\n    get information from links other than the first one, or even get the total\n    number of links until you finish opening the stream with op_test_open().\n   If you do not need to do any of these things, you can dispose of it with\n    op_free() instead.\n\n   This function is provided mostly to simplify porting existing code that used\n    <tt>libvorbisfile</tt>.\n   For new code, you are likely better off using op_test() instead, which\n    is less resource-intensive, requires less data to succeed, and imposes a\n    hard limit on the 
amount of data it examines (important for unseekable\n    streams, where all such data must be buffered until you are sure of the\n    stream type).\n   \\param _stream        The stream to read from (e.g., a <code>FILE *</code>).\n                         This value will be passed verbatim as the first\n                          argument to all of the callbacks.\n   \\param _cb            The callbacks with which to access the stream.\n                         <code><a href=\"#op_read_func\">read()</a></code> must\n                          be implemented.\n                         <code><a href=\"#op_seek_func\">seek()</a></code> and\n                          <code><a href=\"#op_tell_func\">tell()</a></code> may\n                          be <code>NULL</code>, or may always return -1 to\n                          indicate a stream is unseekable, but if\n                          <code><a href=\"#op_seek_func\">seek()</a></code> is\n                          implemented and succeeds on a particular stream, then\n                          <code><a href=\"#op_tell_func\">tell()</a></code> must\n                          also.\n                         <code><a href=\"#op_close_func\">close()</a></code> may\n                          be <code>NULL</code>, but if it is not, it will be\n                          called when the \\c OggOpusFile is destroyed by\n                          op_free().\n                         It will not be called if op_open_callbacks() fails\n                          with an error.\n   \\param _initial_data  An initial buffer of data from the start of the\n                          stream.\n                         Applications can read some number of bytes from the\n                          start of the stream to help identify this as an Opus\n                          stream, and then provide them here to allow the\n                          stream to be tested more thoroughly, even if it is\n                          unseekable.\n   
\\param _initial_bytes The number of bytes in \\a _initial_data.\n                         If the stream is seekable, its current position (as\n                          reported by\n                          <code><a href=\"#op_tell_func\">tell()</a></code>\n                          at the start of this function) must be equal to\n                          \\a _initial_bytes.\n                         Otherwise, seeking to absolute positions will\n                          generate inconsistent results.\n   \\param[out] _error    Returns 0 on success, or a failure code on error.\n                         You may pass in <code>NULL</code> if you don't want\n                          the failure code.\n                         See op_open_callbacks() for a full list of failure\n                          codes.\n   \\return A partially opened \\c OggOpusFile, or <code>NULL</code> on error.\n           <tt>libopusfile</tt> does <em>not</em> take ownership of the stream\n            if the call fails.\n           The calling application is responsible for closing the stream if\n            this call returns an error.*/\nOP_WARN_UNUSED_RESULT OggOpusFile *op_test_callbacks(void *_stream,\n const OpusFileCallbacks *_cb,const unsigned char *_initial_data,\n size_t _initial_bytes,int *_error) OP_ARG_NONNULL(2);\n\n/**Finish opening a stream partially opened with op_test_callbacks() or one of\n    the associated convenience functions.\n   If this function fails, you are still responsible for freeing the\n    \\c OggOpusFile with op_free().\n   \\param _of The \\c OggOpusFile to finish opening.\n   \\return 0 on success, or a negative value on error.\n   \\retval #OP_EREAD         An underlying read, seek, or tell operation failed\n                              when it should have succeeded.\n   \\retval #OP_EFAULT        There was a memory allocation failure, or an\n                              internal library error.\n   \\retval #OP_EIMPL         The stream used a 
feature that is not implemented,\n                              such as an unsupported channel family.\n   \\retval #OP_EINVAL        The stream was not partially opened with\n                              op_test_callbacks() or one of the associated\n                              convenience functions.\n   \\retval #OP_ENOTFORMAT    The stream contained a link that did not have any\n                              logical Opus streams in it.\n   \\retval #OP_EBADHEADER    A required header packet was not properly\n                              formatted, contained illegal values, or was\n                              missing altogether.\n   \\retval #OP_EVERSION      An ID header contained an unrecognized version\n                              number.\n   \\retval #OP_EBADLINK      We failed to find data we had seen before after\n                              seeking.\n   \\retval #OP_EBADTIMESTAMP The first or last timestamp in a link failed basic\n                              validity checks.*/\nint op_test_open(OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/**Release all memory used by an \\c OggOpusFile.\n   \\param _of The \\c OggOpusFile to free.*/\nvoid op_free(OggOpusFile *_of);\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup stream_info Stream Information*/\n/*@{*/\n/**\\name Functions for obtaining information about streams\n\n   These functions allow you to get basic information about a stream, including\n    seekability, the number of links (for chained streams), plus the size,\n    duration, bitrate, header parameters, and meta information for each link\n    (or, where available, the stream as a whole).\n   Some of these (size, duration) are only available for seekable streams.\n   You can also query the current stream position, link, and playback time,\n    and instantaneous bitrate during playback.\n\n   Some of these functions may be used successfully on the partially open\n    streams returned by op_test_callbacks() or one of the associated\n    convenience 
functions.\n   Their documentation will indicate so explicitly.*/\n/*@{*/\n\n/**Returns whether or not the stream being read is seekable.\n   This is true if\n   <ol>\n   <li>The <code><a href=\"#op_seek_func\">seek()</a></code> and\n    <code><a href=\"#op_tell_func\">tell()</a></code> callbacks are both\n    non-<code>NULL</code>,</li>\n   <li>The <code><a href=\"#op_seek_func\">seek()</a></code> callback was\n    successfully executed at least once, and</li>\n   <li>The <code><a href=\"#op_tell_func\">tell()</a></code> callback was\n    successfully able to report the position indicator afterwards.</li>\n   </ol>\n   This function may be called on partially-opened streams.\n   \\param _of The \\c OggOpusFile whose seekable status is to be returned.\n   \\return A non-zero value if seekable, and 0 if unseekable.*/\nint op_seekable(const OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/**Returns the number of links in this chained stream.\n   This function may be called on partially-opened streams, but it will always\n    return 1.\n   The actual number of links is not known until the stream is fully opened.\n   \\param _of The \\c OggOpusFile from which to retrieve the link count.\n   \\return For fully-open seekable streams, this returns the total number of\n            links in the whole stream, which will be at least 1.\n           For partially-open or unseekable streams, this always returns 1.*/\nint op_link_count(const OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/**Get the serial number of the given link in a (possibly-chained) Ogg Opus\n    stream.\n   This function may be called on partially-opened streams, but it will always\n    return the serial number of the Opus stream in the first link.\n   \\param _of The \\c OggOpusFile from which to retrieve the serial number.\n   \\param _li The index of the link whose serial number should be retrieved.\n              Use a negative number to get the serial number of the current\n               link.\n   \\return The serial 
number of the given link.\n           If \\a _li is greater than the total number of links, this returns\n            the serial number of the last link.\n           If the stream is not seekable, this always returns the serial number\n            of the current link.*/\nopus_uint32 op_serialno(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Get the channel count of the given link in a (possibly-chained) Ogg Opus\n    stream.\n   This is equivalent to <code>op_head(_of,_li)->channel_count</code>, but\n    is provided for convenience.\n   This function may be called on partially-opened streams, but it will always\n    return the channel count of the Opus stream in the first link.\n   \\param _of The \\c OggOpusFile from which to retrieve the channel count.\n   \\param _li The index of the link whose channel count should be retrieved.\n              Use a negative number to get the channel count of the current\n               link.\n   \\return The channel count of the given link.\n           If \\a _li is greater than the total number of links, this returns\n            the channel count of the last link.\n           If the stream is not seekable, this always returns the channel count\n            of the current link.*/\nint op_channel_count(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Get the total (compressed) size of the stream, or of an individual link in\n    a (possibly-chained) Ogg Opus stream, including all headers and Ogg muxing\n    overhead.\n   \\warning If the Opus stream (or link) is concurrently multiplexed with other\n    logical streams (e.g., video), this returns the size of the entire stream\n    (or link), not just the number of bytes in the first logical Opus stream.\n   Returning the latter would require scanning the entire file.\n   \\param _of The \\c OggOpusFile from which to retrieve the compressed size.\n   \\param _li The index of the link whose compressed size should be computed.\n              Use a negative number 
to get the compressed size of the entire\n               stream.\n   \\return The compressed size of the entire stream if \\a _li is negative, the\n            compressed size of link \\a _li if it is non-negative, or a negative\n            value on error.\n           The compressed size of the entire stream may be smaller than that\n            of the underlying stream if trailing garbage was detected in the\n            file.\n   \\retval #OP_EINVAL The stream is not seekable (so we can't know the length),\n                       \\a _li wasn't less than the total number of links in\n                       the stream, or the stream was only partially open.*/\nopus_int64 op_raw_total(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Get the total PCM length (number of samples at 48 kHz) of the stream, or of\n    an individual link in a (possibly-chained) Ogg Opus stream.\n   Users looking for <code>op_time_total()</code> should use op_pcm_total()\n    instead.\n   Because timestamps in Opus are fixed at 48 kHz, there is no need for a\n    separate function to convert this to seconds (and leaving it out avoids\n    introducing floating point to the API, for those that wish to avoid it).\n   \\param _of The \\c OggOpusFile from which to retrieve the PCM offset.\n   \\param _li The index of the link whose PCM length should be computed.\n              Use a negative number to get the PCM length of the entire stream.\n   \\return The PCM length of the entire stream if \\a _li is negative, the PCM\n            length of link \\a _li if it is non-negative, or a negative value on\n            error.\n   \\retval #OP_EINVAL The stream is not seekable (so we can't know the length),\n                       \\a _li wasn't less than the total number of links in\n                       the stream, or the stream was only partially open.*/\nogg_int64_t op_pcm_total(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Get the ID header information for the given link in 
a (possibly chained) Ogg\n    Opus stream.\n   This function may be called on partially-opened streams, but it will always\n    return the ID header information of the Opus stream in the first link.\n   \\param _of The \\c OggOpusFile from which to retrieve the ID header\n               information.\n   \\param _li The index of the link whose ID header information should be\n               retrieved.\n              Use a negative number to get the ID header information of the\n               current link.\n              For an unseekable stream, \\a _li is ignored, and the ID header\n               information for the current link is always returned, if\n               available.\n   \\return The contents of the ID header for the given link.*/\nconst OpusHead *op_head(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Get the comment header information for the given link in a (possibly\n    chained) Ogg Opus stream.\n   This function may be called on partially-opened streams, but it will always\n    return the tags from the Opus stream in the first link.\n   \\param _of The \\c OggOpusFile from which to retrieve the comment header\n               information.\n   \\param _li The index of the link whose comment header information should be\n               retrieved.\n              Use a negative number to get the comment header information of\n               the current link.\n              For an unseekable stream, \\a _li is ignored, and the comment\n               header information for the current link is always returned, if\n               available.\n   \\return The contents of the comment header for the given link, or\n            <code>NULL</code> if this is an unseekable stream that encountered\n            an invalid link.*/\nconst OpusTags *op_tags(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Retrieve the index of the current link.\n   This is the link that produced the data most recently read by\n    op_read_float() or its associated 
functions, or, after a seek, the link\n    that the seek target landed in.\n   Reading more data may advance the link index (even on the first read after a\n    seek).\n   \\param _of The \\c OggOpusFile from which to retrieve the current link index.\n   \\return The index of the current link on success, or a negative value on\n            failure.\n           For seekable streams, this is a number between 0 (inclusive) and the\n            value returned by op_link_count() (exclusive).\n           For unseekable streams, this value starts at 0 and increments by one\n            each time a new link is encountered (even though op_link_count()\n            always returns 1).\n   \\retval #OP_EINVAL The stream was only partially open.*/\nint op_current_link(const OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/**Computes the bitrate of the stream, or of an individual link in a\n    (possibly-chained) Ogg Opus stream.\n   The stream must be seekable to compute the bitrate.\n   For unseekable streams, use op_bitrate_instant() to get periodic estimates.\n   \\warning If the Opus stream (or link) is concurrently multiplexed with other\n    logical streams (e.g., video), this uses the size of the entire stream (or\n    link) to compute the bitrate, not just the number of bytes in the first\n    logical Opus stream.\n   Returning the latter requires scanning the entire file, but this may be done\n    by decoding the whole file and calling op_bitrate_instant() once at the\n    end.\n   Install a trivial decoding callback with op_set_decode_callback() if you\n    wish to skip actual decoding during this process.\n   \\param _of The \\c OggOpusFile from which to retrieve the bitrate.\n   \\param _li The index of the link whose bitrate should be computed.\n              Use a negative number to get the bitrate of the whole stream.\n   \\return The bitrate on success, or a negative value on error.\n   \\retval #OP_EINVAL The stream was only partially open, the stream was not\n         
              seekable, or \\a _li was larger than the number of\n                       links.*/\nopus_int32 op_bitrate(const OggOpusFile *_of,int _li) OP_ARG_NONNULL(1);\n\n/**Compute the instantaneous bitrate, measured as the ratio of bits to playable\n    samples decoded since a) the last call to op_bitrate_instant(), b) the last\n    seek, or c) the start of playback, whichever was most recent.\n   This will spike somewhat after a seek or at the start/end of a chain\n    boundary, as pre-skip, pre-roll, and end-trimming causes samples to be\n    decoded but not played.\n   \\param _of The \\c OggOpusFile from which to retrieve the bitrate.\n   \\return The bitrate, in bits per second, or a negative value on error.\n   \\retval #OP_FALSE  No data has been decoded since any of the events\n                       described above.\n   \\retval #OP_EINVAL The stream was only partially open.*/\nopus_int32 op_bitrate_instant(OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/**Obtain the current value of the position indicator for \\a _of.\n   \\param _of The \\c OggOpusFile from which to retrieve the position indicator.\n   \\return The byte position that is currently being read from.\n   \\retval #OP_EINVAL The stream was only partially open.*/\nopus_int64 op_raw_tell(const OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/**Obtain the PCM offset of the next sample to be read.\n   If the stream is not properly timestamped, this might not increment by the\n    proper amount between reads, or even return monotonically increasing\n    values.\n   \\param _of The \\c OggOpusFile from which to retrieve the PCM offset.\n   \\return The PCM offset of the next sample to be read.\n   \\retval #OP_EINVAL The stream was only partially open.*/\nogg_int64_t op_pcm_tell(const OggOpusFile *_of) OP_ARG_NONNULL(1);\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup stream_seeking Seeking*/\n/*@{*/\n/**\\name Functions for seeking in Opus streams\n\n   These functions let you seek in Opus streams, if the underlying 
stream\n    supports it.\n   Seeking is implemented for all built-in stream I/O routines, though some\n    individual streams may not be seekable (pipes, live HTTP streams, or HTTP\n    streams from a server that does not support <code>Range</code> requests).\n\n   op_raw_seek() is the fastest: it is guaranteed to perform at most one\n    physical seek, but, since the target is a byte position, makes no guarantee\n    how close to a given time it will come.\n   op_pcm_seek() provides sample-accurate seeking.\n   The number of physical seeks it requires is still quite small (often 1 or\n    2, even in highly variable bitrate streams).\n\n   Seeking in Opus requires decoding some pre-roll amount before playback to\n    allow the internal state to converge (as if recovering from packet loss).\n   This is handled internally by <tt>libopusfile</tt>, but means there is\n    little extra overhead for decoding up to the exact position requested\n    (since it must decode some amount of audio anyway).\n   It also means that decoding after seeking may not return exactly the same\n    values as would be obtained by decoding the stream straight through.\n   However, such differences are expected to be smaller than the loss\n    introduced by Opus's lossy compression.*/\n/*@{*/\n\n/**Seek to a byte offset relative to the <b>compressed</b> data.\n   This also scans packets to update the PCM cursor.\n   It will cross a logical bitstream boundary, but only if it can't get any\n    packets out of the tail of the link to which it seeks.\n   \\param _of          The \\c OggOpusFile in which to seek.\n   \\param _byte_offset The byte position to seek to.\n                       This must be between 0 and #op_raw_total(\\a _of,\\c -1)\n                        (inclusive).\n   \\return 0 on success, or a negative error code on failure.\n   \\retval #OP_EREAD    The underlying seek operation failed.\n   \\retval #OP_EINVAL   The stream was only partially open, or the target was\n          
               outside the valid range for the stream.\n   \\retval #OP_ENOSEEK  This stream is not seekable.\n   \\retval #OP_EBADLINK Failed to initialize a decoder for a stream for an\n                         unknown reason.*/\nint op_raw_seek(OggOpusFile *_of,opus_int64 _byte_offset) OP_ARG_NONNULL(1);\n\n/**Seek to the specified PCM offset, such that decoding will begin at exactly\n    the requested position.\n   \\param _of         The \\c OggOpusFile in which to seek.\n   \\param _pcm_offset The PCM offset to seek to.\n                      This is in samples at 48 kHz relative to the start of the\n                       stream.\n   \\return 0 on success, or a negative value on error.\n   \\retval #OP_EREAD    An underlying read or seek operation failed.\n   \\retval #OP_EINVAL   The stream was only partially open, or the target was\n                         outside the valid range for the stream.\n   \\retval #OP_ENOSEEK  This stream is not seekable.\n   \\retval #OP_EBADLINK We failed to find data we had seen before, or the\n                         bitstream structure was sufficiently malformed that\n                         seeking to the target destination was impossible.*/\nint op_pcm_seek(OggOpusFile *_of,ogg_int64_t _pcm_offset) OP_ARG_NONNULL(1);\n\n/*@}*/\n/*@}*/\n\n/**\\defgroup stream_decoding Decoding*/\n/*@{*/\n/**\\name Functions for decoding audio data\n\n   These functions retrieve actual decoded audio data from the stream.\n   The general functions, op_read() and op_read_float(), return 16-bit or\n    floating-point output, both using native endian ordering.\n   The number of channels returned can change from link to link in a chained\n    stream.\n   There are special functions, op_read_stereo() and op_read_float_stereo(),\n    which always output two channels, to simplify applications which do not\n    wish to handle multichannel audio.\n   These downmix multichannel files to two channels, so they can always return\n    samples in the 
same format for every link in a chained file.\n\n   If the rest of your audio processing chain can handle floating point, the\n    floating-point routines should be preferred, as they prevent clipping and\n    other issues which might be avoided entirely if, e.g., you scale down the\n    volume at some other stage.\n   However, if you intend to consume 16-bit samples directly, the conversion in\n    <tt>libopusfile</tt> provides noise-shaping dithering and, if compiled\n    against <tt>libopus</tt>&nbsp;1.1 or later, soft-clipping prevention.\n\n   <tt>libopusfile</tt> can also be configured at compile time to use the\n    fixed-point <tt>libopus</tt> API.\n   If so, <tt>libopusfile</tt>'s floating-point API may also be disabled.\n   In that configuration, nothing in <tt>libopusfile</tt> will use any\n    floating-point operations, to simplify support on devices without an\n    adequate FPU.\n\n   \warning HTTPS streams may be vulnerable to truncation attacks if you do\n    not check the error return code from op_read_float() or its associated\n    functions.\n   If the remote peer does not close the connection gracefully (with a TLS\n    \"close notify\" message), these functions will return #OP_EREAD instead of 0\n    when they reach the end of the file.\n   If you are reading from an <https:> URL (particularly if seeking is not\n    supported), you should make sure to check for this error and warn the user\n    appropriately.*/\n/*@{*/\n\n/**Indicates that the decoding callback should produce signed 16-bit\n    native-endian output samples.*/\n#define OP_DEC_FORMAT_SHORT (7008)\n/**Indicates that the decoding callback should produce 32-bit native-endian\n    float samples.*/\n#define OP_DEC_FORMAT_FLOAT (7040)\n\n/**Indicates that the decoding callback did not decode anything, and that\n    <tt>libopusfile</tt> should decode normally instead.*/\n#define OP_DEC_USE_DEFAULT  (6720)\n\n/**Called to decode an Opus packet.\n   This should invoke the functional 
equivalent of opus_multistream_decode() or\n    opus_multistream_decode_float(), except that it returns 0 on success\n    instead of the number of decoded samples (which is known a priori).\n   \\param _ctx       The application-provided callback context.\n   \\param _decoder   The decoder to use to decode the packet.\n   \\param[out] _pcm  The buffer to decode into.\n                     This will always have enough room for \\a _nchannels of\n                      \\a _nsamples samples, which should be placed into this\n                      buffer interleaved.\n   \\param _op        The packet to decode.\n                     This will always have its granule position set to a valid\n                      value.\n   \\param _nsamples  The number of samples expected from the packet.\n   \\param _nchannels The number of channels expected from the packet.\n   \\param _format    The desired sample output format.\n                     This is either #OP_DEC_FORMAT_SHORT or\n                      #OP_DEC_FORMAT_FLOAT.\n   \\param _li        The index of the link from which this packet was decoded.\n   \\return A non-negative value on success, or a negative value on error.\n           Any error codes should be the same as those returned by\n            opus_multistream_decode() or opus_multistream_decode_float().\n           Success codes are as follows:\n   \\retval 0                   Decoding was successful.\n                               The application has filled the buffer with\n                                exactly <code>\\a _nsamples*\\a\n                                _nchannels</code> samples in the requested\n                                format.\n   \\retval #OP_DEC_USE_DEFAULT No decoding was done.\n                               <tt>libopusfile</tt> should do the decoding\n                                by itself instead.*/\ntypedef int (*op_decode_cb_func)(void *_ctx,OpusMSDecoder *_decoder,void *_pcm,\n const ogg_packet *_op,int _nsamples,int 
_nchannels,int _format,int _li);\n\n/**Sets the packet decode callback function.\n   If set, this is called once for each packet that needs to be decoded.\n   This can be used by advanced applications to do additional processing on the\n    compressed or uncompressed data.\n   For example, an application might save the final entropy coder state for\n    debugging and testing purposes, or it might apply additional filters\n    before the downmixing, dithering, or soft-clipping performed by\n    <tt>libopusfile</tt>, so long as these filters do not introduce any\n    latency.\n\n   A call to this function is no guarantee that the audio will eventually be\n    delivered to the application.\n   <tt>libopusfile</tt> may discard some or all of the decoded audio data\n    (i.e., at the beginning or end of a link, or after a seek); however, the\n    callback is still required to provide all of it.\n   \param _of        The \c OggOpusFile on which to set the decode callback.\n   \param _decode_cb The callback function to call.\n                     This may be <code>NULL</code> to disable calling the\n                      callback.\n   \param _ctx       The application-provided context pointer to pass to the\n                      callback on each call.*/\nvoid op_set_decode_callback(OggOpusFile *_of,\n op_decode_cb_func _decode_cb,void *_ctx) OP_ARG_NONNULL(1);\n\n/**Gain offset type that indicates that the provided offset is relative to the\n    header gain.\n   This is the default.*/\n#define OP_HEADER_GAIN   (0)\n\n/**Gain offset type that indicates that the provided offset is relative to the\n    R128_ALBUM_GAIN value (if any), in addition to the header gain.*/\n#define OP_ALBUM_GAIN    (3007)\n\n/**Gain offset type that indicates that the provided offset is relative to the\n    R128_TRACK_GAIN value (if any), in addition to the header gain.*/\n#define OP_TRACK_GAIN    (3008)\n\n/**Gain offset type that indicates that the provided offset should be used as\n    the 
gain directly, without applying any of the header or track gains.*/\n#define OP_ABSOLUTE_GAIN (3009)\n\n/**Sets the gain to be used for decoded output.\n   By default, the gain in the header is applied with no additional offset.\n   The total gain (including header gain and/or track gain, if applicable, and\n    this offset) will be clamped to [-32768,32767]/256 dB.\n   This is more than enough to saturate or underflow 16-bit PCM.\n   \note The new gain will not be applied to any already buffered, decoded\n    output.\n   This means you cannot change it sample-by-sample, as at best it will be\n    updated packet-by-packet.\n   It is meant for setting a target volume level, rather than applying smooth\n    fades, etc.\n   \param _of             The \c OggOpusFile on which to set the gain offset.\n   \param _gain_type      One of #OP_HEADER_GAIN, #OP_ALBUM_GAIN,\n                           #OP_TRACK_GAIN, or #OP_ABSOLUTE_GAIN.\n   \param _gain_offset_q8 The gain offset to apply, in 1/256ths of a dB.\n   \return 0 on success or a negative value on error.\n   \retval #OP_EINVAL The \a _gain_type was unrecognized.*/\nint op_set_gain_offset(OggOpusFile *_of,\n int _gain_type,opus_int32 _gain_offset_q8) OP_ARG_NONNULL(1);\n\n/**Sets whether or not dithering is enabled for 16-bit decoding.\n   By default, when <tt>libopusfile</tt> is compiled to use floating-point\n    internally, calling op_read() or op_read_stereo() will first decode to\n    float, and then convert to fixed-point using noise-shaping dithering.\n   This flag can be used to disable that dithering.\n   When the application uses op_read_float() or op_read_float_stereo(), or when\n    the library has been compiled to decode directly to fixed point, this flag\n    has no effect.\n   \param _of      The \c OggOpusFile on which to enable or disable dithering.\n   \param _enabled A non-zero value to enable dithering, or 0 to disable it.*/\nvoid op_set_dither_enabled(OggOpusFile *_of,int _enabled) 
OP_ARG_NONNULL(1);\n\n/**Reads more samples from the stream.\n   \\note Although \\a _buf_size must indicate the total number of values that\n    can be stored in \\a _pcm, the return value is the number of samples\n    <em>per channel</em>.\n   This is done because\n   <ol>\n   <li>The channel count cannot be known a priori (reading more samples might\n        advance us into the next link, with a different channel count), so\n        \\a _buf_size cannot also be in units of samples per channel,</li>\n   <li>Returning the samples per channel matches the <code>libopus</code> API\n        as closely as we're able,</li>\n   <li>Returning the total number of values instead of samples per channel\n        would mean the caller would need a division to compute the samples per\n        channel, and might worry about the possibility of getting back samples\n        for some channels and not others, and</li>\n   <li>This approach is relatively fool-proof: if an application passes too\n        small a value to \\a _buf_size, they will simply get fewer samples back,\n        and if they assume the return value is the total number of values, then\n        they will simply read too few (rather than reading too many and going\n        off the end of the buffer).</li>\n   </ol>\n   \\param      _of       The \\c OggOpusFile from which to read.\n   \\param[out] _pcm      A buffer in which to store the output PCM samples, as\n                          signed native-endian 16-bit values at 48&nbsp;kHz\n                          with a nominal range of <code>[-32768,32767)</code>.\n                         Multiple channels are interleaved using the\n                          <a href=\"http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-800004.3.9\">Vorbis\n                          channel ordering</a>.\n                         This must have room for at least \\a _buf_size values.\n   \\param      _buf_size The number of values that can be stored in \\a _pcm.\n                   
      It is recommended that this be large enough for at\n                          least 120 ms of data at 48 kHz per channel (5760\n                          values per channel).\n                         Smaller buffers will simply return less data, possibly\n                          consuming more memory to buffer the data internally.\n                         <tt>libopusfile</tt> may return less data than\n                          requested.\n                         If so, there is no guarantee that the remaining data\n                          in \\a _pcm will be unmodified.\n   \\param[out] _li       The index of the link this data was decoded from.\n                         You may pass <code>NULL</code> if you do not need this\n                          information.\n                         If this function fails (returning a negative value),\n                          this parameter is left unset.\n   \\return The number of samples read per channel on success, or a negative\n            value on failure.\n           The channel count can be retrieved on success by calling\n            <code>op_head(_of,*_li)</code>.\n           The number of samples returned may be 0 if the buffer was too small\n            to store even a single sample for all channels, or if end-of-file\n            was reached.\n           The list of possible failure codes follows.\n           Most of them can only be returned by unseekable, chained streams\n            that encounter a new link.\n   \\retval #OP_HOLE          There was a hole in the data, and some samples\n                              may have been skipped.\n                             Call this function again to continue decoding\n                              past the hole.\n   \\retval #OP_EREAD         An underlying read operation failed.\n                             This may signal a truncation attack from an\n                              <https:> source.\n   \\retval #OP_EFAULT        An internal memory 
allocation failed.\n   \\retval #OP_EIMPL         An unseekable stream encountered a new link that\n                              used a feature that is not implemented, such as\n                              an unsupported channel family.\n   \\retval #OP_EINVAL        The stream was only partially open.\n   \\retval #OP_ENOTFORMAT    An unseekable stream encountered a new link that\n                              did not have any logical Opus streams in it.\n   \\retval #OP_EBADHEADER    An unseekable stream encountered a new link with a\n                              required header packet that was not properly\n                              formatted, contained illegal values, or was\n                              missing altogether.\n   \\retval #OP_EVERSION      An unseekable stream encountered a new link with\n                              an ID header that contained an unrecognized\n                              version number.\n   \\retval #OP_EBADPACKET    Failed to properly decode the next packet.\n   \\retval #OP_EBADLINK      We failed to find data we had seen before.\n   \\retval #OP_EBADTIMESTAMP An unseekable stream encountered a new link with\n                              a starting timestamp that failed basic validity\n                              checks.*/\nOP_WARN_UNUSED_RESULT int op_read(OggOpusFile *_of,\n opus_int16 *_pcm,int _buf_size,int *_li) OP_ARG_NONNULL(1);\n\n/**Reads more samples from the stream.\n   \\note Although \\a _buf_size must indicate the total number of values that\n    can be stored in \\a _pcm, the return value is the number of samples\n    <em>per channel</em>.\n   <ol>\n   <li>The channel count cannot be known a priori (reading more samples might\n        advance us into the next link, with a different channel count), so\n        \\a _buf_size cannot also be in units of samples per channel,</li>\n   <li>Returning the samples per channel matches the <code>libopus</code> API\n        as closely as we're able,</li>\n   
<li>Returning the total number of values instead of samples per channel\n        would mean the caller would need a division to compute the samples per\n        channel, and might worry about the possibility of getting back samples\n        for some channels and not others, and</li>\n   <li>This approach is relatively fool-proof: if an application passes too\n        small a value to \\a _buf_size, they will simply get fewer samples back,\n        and if they assume the return value is the total number of values, then\n        they will simply read too few (rather than reading too many and going\n        off the end of the buffer).</li>\n   </ol>\n   \\param      _of       The \\c OggOpusFile from which to read.\n   \\param[out] _pcm      A buffer in which to store the output PCM samples as\n                          signed floats at 48&nbsp;kHz with a nominal range of\n                          <code>[-1.0,1.0]</code>.\n                         Multiple channels are interleaved using the\n                          <a href=\"http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-800004.3.9\">Vorbis\n                          channel ordering</a>.\n                         This must have room for at least \\a _buf_size floats.\n   \\param      _buf_size The number of floats that can be stored in \\a _pcm.\n                         It is recommended that this be large enough for at\n                          least 120 ms of data at 48 kHz per channel (5760\n                          samples per channel).\n                         Smaller buffers will simply return less data, possibly\n                          consuming more memory to buffer the data internally.\n                         If less than \\a _buf_size values are returned,\n                          <tt>libopusfile</tt> makes no guarantee that the\n                          remaining data in \\a _pcm will be unmodified.\n   \\param[out] _li       The index of the link this data was decoded from.\n           
              You may pass <code>NULL</code> if you do not need this\n                          information.\n                         If this function fails (returning a negative value),\n                          this parameter is left unset.\n   \\return The number of samples read per channel on success, or a negative\n            value on failure.\n           The channel count can be retrieved on success by calling\n            <code>op_head(_of,*_li)</code>.\n           The number of samples returned may be 0 if the buffer was too small\n            to store even a single sample for all channels, or if end-of-file\n            was reached.\n           The list of possible failure codes follows.\n           Most of them can only be returned by unseekable, chained streams\n            that encounter a new link.\n   \\retval #OP_HOLE          There was a hole in the data, and some samples\n                              may have been skipped.\n                             Call this function again to continue decoding\n                              past the hole.\n   \\retval #OP_EREAD         An underlying read operation failed.\n                             This may signal a truncation attack from an\n                              <https:> source.\n   \\retval #OP_EFAULT        An internal memory allocation failed.\n   \\retval #OP_EIMPL         An unseekable stream encountered a new link that\n                              used a feature that is not implemented, such as\n                              an unsupported channel family.\n   \\retval #OP_EINVAL        The stream was only partially open.\n   \\retval #OP_ENOTFORMAT    An unseekable stream encountered a new link that\n                              did not have any logical Opus streams in it.\n   \\retval #OP_EBADHEADER    An unseekable stream encountered a new link with a\n                              required header packet that was not properly\n                              formatted, contained 
illegal values, or was\n                              missing altogether.\n   \\retval #OP_EVERSION      An unseekable stream encountered a new link with\n                              an ID header that contained an unrecognized\n                              version number.\n   \\retval #OP_EBADPACKET    Failed to properly decode the next packet.\n   \\retval #OP_EBADLINK      We failed to find data we had seen before.\n   \\retval #OP_EBADTIMESTAMP An unseekable stream encountered a new link with\n                              a starting timestamp that failed basic validity\n                              checks.*/\nOP_WARN_UNUSED_RESULT int op_read_float(OggOpusFile *_of,\n float *_pcm,int _buf_size,int *_li) OP_ARG_NONNULL(1);\n\n/**Reads more samples from the stream and downmixes to stereo, if necessary.\n   This function is intended for simple players that want a uniform output\n    format, even if the channel count changes between links in a chained\n    stream.\n   \\note \\a _buf_size indicates the total number of values that can be stored\n    in \\a _pcm, while the return value is the number of samples <em>per\n    channel</em>, even though the channel count is known, for consistency with\n    op_read().\n   \\param      _of       The \\c OggOpusFile from which to read.\n   \\param[out] _pcm      A buffer in which to store the output PCM samples, as\n                          signed native-endian 16-bit values at 48&nbsp;kHz\n                          with a nominal range of <code>[-32768,32767)</code>.\n                         The left and right channels are interleaved in the\n                          buffer.\n                         This must have room for at least \\a _buf_size values.\n   \\param      _buf_size The number of values that can be stored in \\a _pcm.\n                         It is recommended that this be large enough for at\n                          least 120 ms of data at 48 kHz per channel (11520\n                          values 
total).\n                         Smaller buffers will simply return less data, possibly\n                          consuming more memory to buffer the data internally.\n                         If less than \\a _buf_size values are returned,\n                          <tt>libopusfile</tt> makes no guarantee that the\n                          remaining data in \\a _pcm will be unmodified.\n   \\return The number of samples read per channel on success, or a negative\n            value on failure.\n           The number of samples returned may be 0 if the buffer was too small\n            to store even a single sample for both channels, or if end-of-file\n            was reached.\n           The list of possible failure codes follows.\n           Most of them can only be returned by unseekable, chained streams\n            that encounter a new link.\n   \\retval #OP_HOLE          There was a hole in the data, and some samples\n                              may have been skipped.\n                             Call this function again to continue decoding\n                              past the hole.\n   \\retval #OP_EREAD         An underlying read operation failed.\n                             This may signal a truncation attack from an\n                              <https:> source.\n   \\retval #OP_EFAULT        An internal memory allocation failed.\n   \\retval #OP_EIMPL         An unseekable stream encountered a new link that\n                              used a feature that is not implemented, such as\n                              an unsupported channel family.\n   \\retval #OP_EINVAL        The stream was only partially open.\n   \\retval #OP_ENOTFORMAT    An unseekable stream encountered a new link that\n                              did not have any logical Opus streams in it.\n   \\retval #OP_EBADHEADER    An unseekable stream encountered a new link with a\n                              required header packet that was not properly\n                       
       formatted, contained illegal values, or was\n                              missing altogether.\n   \\retval #OP_EVERSION      An unseekable stream encountered a new link with\n                              an ID header that contained an unrecognized\n                              version number.\n   \\retval #OP_EBADPACKET    Failed to properly decode the next packet.\n   \\retval #OP_EBADLINK      We failed to find data we had seen before.\n   \\retval #OP_EBADTIMESTAMP An unseekable stream encountered a new link with\n                              a starting timestamp that failed basic validity\n                              checks.*/\nOP_WARN_UNUSED_RESULT int op_read_stereo(OggOpusFile *_of,\n opus_int16 *_pcm,int _buf_size) OP_ARG_NONNULL(1);\n\n/**Reads more samples from the stream and downmixes to stereo, if necessary.\n   This function is intended for simple players that want a uniform output\n    format, even if the channel count changes between links in a chained\n    stream.\n   \\note \\a _buf_size indicates the total number of values that can be stored\n    in \\a _pcm, while the return value is the number of samples <em>per\n    channel</em>, even though the channel count is known, for consistency with\n    op_read_float().\n   \\param      _of       The \\c OggOpusFile from which to read.\n   \\param[out] _pcm      A buffer in which to store the output PCM samples, as\n                          signed floats at 48&nbsp;kHz with a nominal range of\n                          <code>[-1.0,1.0]</code>.\n                         The left and right channels are interleaved in the\n                          buffer.\n                         This must have room for at least \\a _buf_size values.\n   \\param      _buf_size The number of values that can be stored in \\a _pcm.\n                         It is recommended that this be large enough for at\n                          least 120 ms of data at 48 kHz per channel (11520\n                          
values total).\n                         Smaller buffers will simply return less data, possibly\n                          consuming more memory to buffer the data internally.\n                         If less than \a _buf_size values are returned,\n                          <tt>libopusfile</tt> makes no guarantee that the\n                          remaining data in \a _pcm will be unmodified.\n   \return The number of samples read per channel on success, or a negative\n            value on failure.\n           The number of samples returned may be 0 if the buffer was too small\n            to store even a single sample for both channels, or if end-of-file\n            was reached.\n           The list of possible failure codes follows.\n           Most of them can only be returned by unseekable, chained streams\n            that encounter a new link.\n   \retval #OP_HOLE          There was a hole in the data, and some samples\n                              may have been skipped.\n                             Call this function again to continue decoding\n                              past the hole.\n   \retval #OP_EREAD         An underlying read operation failed.\n                             This may signal a truncation attack from an\n                              <https:> source.\n   \retval #OP_EFAULT        An internal memory allocation failed.\n   \retval #OP_EIMPL         An unseekable stream encountered a new link that\n                              used a feature that is not implemented, such as\n                              an unsupported channel family.\n   \retval #OP_EINVAL        The stream was only partially open.\n   \retval #OP_ENOTFORMAT    An unseekable stream encountered a new link that\n                              did not have any logical Opus streams in it.\n   \retval #OP_EBADHEADER    An unseekable stream encountered a new link with a\n                              required header packet that was not properly\n           
                   formatted, contained illegal values, or was\n                              missing altogether.\n   \\retval #OP_EVERSION      An unseekable stream encountered a new link with\n                              an ID header that contained an unrecognized\n                              version number.\n   \\retval #OP_EBADPACKET    Failed to properly decode the next packet.\n   \\retval #OP_EBADLINK      We failed to find data we had seen before.\n   \\retval #OP_EBADTIMESTAMP An unseekable stream encountered a new link with\n                              a starting timestamp that failed basic validity\n                              checks.*/\nOP_WARN_UNUSED_RESULT int op_read_float_stereo(OggOpusFile *_of,\n float *_pcm,int _buf_size) OP_ARG_NONNULL(1);\n\n/*@}*/\n/*@}*/\n\n# if OP_GNUC_PREREQ(4,0)\n#  pragma GCC visibility pop\n# endif\n\n# if defined(__cplusplus)\n}\n# endif\n\n#endif\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/http.c",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 2012                *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************/\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"internal.h\"\n#include <ctype.h>\n#include <errno.h>\n#include <limits.h>\n#include <string.h>\n\n/*RFCs referenced in this file:\n  RFC  761: DOD Standard Transmission Control Protocol\n  RFC 1535: A Security Problem and Proposed Correction With Widely Deployed DNS\n   Software\n  RFC 1738: Uniform Resource Locators (URL)\n  RFC 1945: Hypertext Transfer Protocol -- HTTP/1.0\n  RFC 2068: Hypertext Transfer Protocol -- HTTP/1.1\n  RFC 2145: Use and Interpretation of HTTP Version Numbers\n  RFC 2246: The TLS Protocol Version 1.0\n  RFC 2459: Internet X.509 Public Key Infrastructure Certificate and\n   Certificate Revocation List (CRL) Profile\n  RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1\n  RFC 2617: HTTP Authentication: Basic and Digest Access Authentication\n  RFC 2817: Upgrading to TLS Within HTTP/1.1\n  RFC 2818: HTTP Over TLS\n  RFC 3492: Punycode: A Bootstring encoding of Unicode for Internationalized\n   Domain Names in Applications (IDNA)\n  RFC 3986: Uniform Resource Identifier (URI): Generic Syntax\n  RFC 3987: Internationalized Resource Identifiers (IRIs)\n  RFC 4343: Domain Name System (DNS) Case 
Insensitivity Clarification\n  RFC 5894: Internationalized Domain Names for Applications (IDNA):\n   Background, Explanation, and Rationale\n  RFC 6066: Transport Layer Security (TLS) Extensions: Extension Definitions\n  RFC 6125: Representation and Verification of Domain-Based Application Service\n   Identity within Internet Public Key Infrastructure Using X.509 (PKIX)\n   Certificates in the Context of Transport Layer Security (TLS)\n  RFC 6555: Happy Eyeballs: Success with Dual-Stack Hosts*/\n\ntypedef struct OpusParsedURL   OpusParsedURL;\ntypedef struct OpusStringBuf   OpusStringBuf;\ntypedef struct OpusHTTPConn    OpusHTTPConn;\ntypedef struct OpusHTTPStream  OpusHTTPStream;\n\nstatic char *op_string_range_dup(const char *_start,const char *_end){\n  size_t  len;\n  char   *ret;\n  OP_ASSERT(_start<=_end);\n  len=_end-_start;\n  /*This is to help avoid overflow elsewhere, later.*/\n  if(OP_UNLIKELY(len>=INT_MAX))return NULL;\n  ret=(char *)_ogg_malloc(sizeof(*ret)*(len+1));\n  if(OP_LIKELY(ret!=NULL)){\n    ret=(char *)memcpy(ret,_start,sizeof(*ret)*(len));\n    ret[len]='\\0';\n  }\n  return ret;\n}\n\nstatic char *op_string_dup(const char *_s){\n  return op_string_range_dup(_s,_s+strlen(_s));\n}\n\nstatic char *op_string_tolower(char *_s){\n  int i;\n  for(i=0;_s[i]!='\\0';i++){\n    int c;\n    c=_s[i];\n    if(c>='A'&&c<='Z')c+='a'-'A';\n    _s[i]=(char)c;\n  }\n  return _s;\n}\n\n/*URI character classes (from RFC 3986).*/\n#define OP_URL_ALPHA \\\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz\"\n#define OP_URL_DIGIT       \"0123456789\"\n#define OP_URL_HEXDIGIT    \"0123456789ABCDEFabcdef\"\n/*Not a character class, but the characters allowed in <scheme>.*/\n#define OP_URL_SCHEME      OP_URL_ALPHA OP_URL_DIGIT \"+-.\"\n#define OP_URL_GEN_DELIMS  \"#/:?@[]\"\n#define OP_URL_SUB_DELIMS  \"!$&'()*+,;=\"\n#define OP_URL_RESERVED    OP_URL_GEN_DELIMS OP_URL_SUB_DELIMS\n#define OP_URL_UNRESERVED  OP_URL_ALPHA OP_URL_DIGIT \"-._~\"\n/*Not a character 
class, but the characters allowed in <pct-encoded>.*/\n#define OP_URL_PCT_ENCODED \"%\"\n/*Not a character class or production rule, but for convenience.*/\n#define OP_URL_PCHAR_BASE \\\n OP_URL_UNRESERVED OP_URL_PCT_ENCODED OP_URL_SUB_DELIMS\n#define OP_URL_PCHAR       OP_URL_PCHAR_BASE \":@\"\n/*Not a character class, but the characters allowed in <userinfo> and\n   <IP-literal>.*/\n#define OP_URL_PCHAR_NA    OP_URL_PCHAR_BASE \":\"\n/*Not a character class, but the characters allowed in <segment-nz-nc>.*/\n#define OP_URL_PCHAR_NC    OP_URL_PCHAR_BASE \"@\"\n/*Not a character class, but the characters allowed in <path>.*/\n#define OP_URL_PATH        OP_URL_PCHAR \"/\"\n/*Not a character class, but the characters allowed in <query> / <fragment>.*/\n#define OP_URL_QUERY_FRAG  OP_URL_PCHAR \"/?\"\n\n/*Check the <% HEXDIG HEXDIG> escapes of a URL for validity.\n  Return: 0 if valid, or a negative value on failure.*/\nstatic int op_validate_url_escapes(const char *_s){\n  int i;\n  for(i=0;_s[i];i++){\n    if(_s[i]=='%'){\n      if(OP_UNLIKELY(!isxdigit(_s[i+1]))\n       ||OP_UNLIKELY(!isxdigit(_s[i+2]))\n       /*RFC 3986 says %00 \"should be rejected if the application is not\n          expecting to receive raw data within a component.\"*/\n       ||OP_UNLIKELY(_s[i+1]=='0'&&_s[i+2]=='0')){\n        return OP_FALSE;\n      }\n      i+=2;\n    }\n  }\n  return 0;\n}\n\n/*Convert a hex digit to its actual value.\n  _c: The hex digit to convert.\n      Presumed to be valid ('0'...'9', 'A'...'F', or 'a'...'f').\n  Return: The value of the digit, in the range [0,15].*/\nstatic int op_hex_value(int _c){\n  return _c>='a'?_c-'a'+10:_c>='A'?_c-'A'+10:_c-'0';\n}\n\n/*Unescape all the <% HEXDIG HEXDIG> sequences in a string in-place.\n  This does no validity checking.*/\nstatic char *op_unescape_url_component(char *_s){\n  int i;\n  int j;\n  for(i=j=0;_s[i];i++,j++){\n    if(_s[i]=='%'){\n      /*Decode into the last digit's slot, then let the compaction copy below\n         shift it down.*/\n      _s[i+2]=(char)(op_hex_value(_s[i+1])<<4|op_hex_value(_s[i+2]));\n      i+=2;\n    
}\n    _s[j]=_s[i];\n  }\n  /*The string may have shrunk: terminate it at its new length.*/\n  _s[j]='\0';\n  return _s;\n}\n\n/*Parse a file: URL.\n  This code is not meant to be fast: strspn() with large sets is likely to be\n   slow, but it is very convenient.\n  It is meant to be RFC 1738-compliant (as updated by RFC 3986).*/\nstatic const char *op_parse_file_url(const char *_src){\n  const char *scheme_end;\n  const char *path;\n  const char *path_end;\n  scheme_end=_src+strspn(_src,OP_URL_SCHEME);\n  if(OP_UNLIKELY(*scheme_end!=':')\n   ||scheme_end-_src!=4||op_strncasecmp(_src,\"file\",4)!=0){\n    /*Unsupported protocol.*/\n    return NULL;\n  }\n  /*Make sure all escape sequences are valid to simplify unescaping later.*/\n  if(OP_UNLIKELY(op_validate_url_escapes(scheme_end+1)<0))return NULL;\n  if(scheme_end[1]=='/'&&scheme_end[2]=='/'){\n    const char *host;\n    /*file: URLs can have a host!\n      Yeah, I was surprised, too, but that's what RFC 1738 says.\n      It also says, \"The file URL scheme is unusual in that it does not specify\n       an Internet protocol or access method for such files; as such, its\n       utility in network protocols between hosts is limited,\" which is a mild\n       understatement.*/\n    host=scheme_end+3;\n    /*The empty host is what we expect.*/\n    if(OP_LIKELY(*host=='/'))path=host;\n    else{\n      const char *host_end;\n      char        host_buf[28];\n      /*RFC 1738 says localhost \"is interpreted as `the machine from which the\n         URL is being interpreted,'\" so let's check for it.*/\n      host_end=host+strspn(host,OP_URL_PCHAR_BASE);\n      /*No <port> allowed.\n        This also rejects IP-Literals.*/\n      if(*host_end!='/')return NULL;\n      /*An escaped \"localhost\" can take at most 27 characters.*/\n      if(OP_UNLIKELY(host_end-host>27))return NULL;\n      memcpy(host_buf,host,sizeof(*host_buf)*(host_end-host));\n      host_buf[host_end-host]='\0';\n      op_unescape_url_component(host_buf);\n      op_string_tolower(host_buf);\n      /*Some other host: give up.*/\n      
if(OP_UNLIKELY(strcmp(host_buf,\"localhost\")!=0))return NULL;\n      path=host_end;\n    }\n  }\n  else path=scheme_end+1;\n  path_end=path+strspn(path,OP_URL_PATH);\n  /*This will reject a <query> or <fragment> component, too.\n    I don't know what to do with queries, but a temporal fragment would at\n     least make sense.\n    RFC 1738 pretty clearly defines a <searchpart> that's equivalent to the\n     RFC 3986 <query> component for other schemes, but not the file: scheme,\n     so I'm going to just reject it.*/\n  if(*path_end!='\\0')return NULL;\n  return path;\n}\n\n#if defined(OP_ENABLE_HTTP)\n# if defined(_WIN32)\n#  include <winsock2.h>\n#  include <ws2tcpip.h>\n#  include <openssl/ssl.h>\n#  include <openssl/asn1.h>\n#  include \"winerrno.h\"\n\ntypedef SOCKET op_sock;\n\n#  define OP_INVALID_SOCKET (INVALID_SOCKET)\n\n/*Vista and later support WSAPoll(), but we don't want to rely on that.\n  Instead we re-implement it badly using select().\n  Unfortunately, they define a conflicting struct pollfd, so we only define our\n   own if it looks like that one has not already been defined.*/\n#  if !defined(POLLIN)\n/*Equivalent to POLLIN.*/\n#   define POLLRDNORM (0x0100)\n/*Priority band data can be read.*/\n#   define POLLRDBAND (0x0200)\n/*There is data to read.*/\n#   define POLLIN     (POLLRDNORM|POLLRDBAND)\n/*There is urgent data to read.*/\n#   define POLLPRI    (0x0400)\n/*Equivalent to POLLOUT.*/\n#   define POLLWRNORM (0x0010)\n/*Writing now will not block.*/\n#   define POLLOUT    (POLLWRNORM)\n/*Priority data may be written.*/\n#   define POLLWRBAND (0x0020)\n/*Error condition (output only).*/\n#   define POLLERR    (0x0001)\n/*Hang up (output only).*/\n#   define POLLHUP    (0x0002)\n/*Invalid request: fd not open (output only).*/\n#   define POLLNVAL   (0x0004)\n\nstruct pollfd{\n  /*File descriptor.*/\n  op_sock fd;\n  /*Requested events.*/\n  short   events;\n  /*Returned events.*/\n  short   revents;\n};\n#  endif\n\n/*But Winsock never 
defines nfds_t (it's simply hard-coded to ULONG).*/\ntypedef unsigned long nfds_t;\n\n/*The usage of FD_SET() below is O(N^2).\n  This is okay because select() is limited to 64 sockets in Winsock, anyway.\n  In practice, we only ever call it with one or two sockets.*/\nstatic int op_poll_win32(struct pollfd *_fds,nfds_t _nfds,int _timeout){\n  struct timeval tv;\n  fd_set         ifds;\n  fd_set         ofds;\n  fd_set         efds;\n  nfds_t         i;\n  int            ret;\n  FD_ZERO(&ifds);\n  FD_ZERO(&ofds);\n  FD_ZERO(&efds);\n  for(i=0;i<_nfds;i++){\n    _fds[i].revents=0;\n    if(_fds[i].events&POLLIN)FD_SET(_fds[i].fd,&ifds);\n    if(_fds[i].events&POLLOUT)FD_SET(_fds[i].fd,&ofds);\n    FD_SET(_fds[i].fd,&efds);\n  }\n  if(_timeout>=0){\n    tv.tv_sec=_timeout/1000;\n    tv.tv_usec=(_timeout%1000)*1000;\n  }\n  ret=select(-1,&ifds,&ofds,&efds,_timeout<0?NULL:&tv);\n  if(ret>0){\n    for(i=0;i<_nfds;i++){\n      if(FD_ISSET(_fds[i].fd,&ifds))_fds[i].revents|=POLLIN;\n      if(FD_ISSET(_fds[i].fd,&ofds))_fds[i].revents|=POLLOUT;\n      /*This isn't correct: there are several different things that might have\n         happened to a fd in efds, but I don't know a good way to distinguish\n         them without more context from the caller.\n        It's okay, because we don't actually check any of these bits, we just\n         need _some_ bit set.*/\n      if(FD_ISSET(_fds[i].fd,&efds))_fds[i].revents|=POLLHUP;\n    }\n  }\n  return ret;\n}\n\n/*We define op_errno() to make it clear that it's not an l-value like normal\n   errno is.*/\n#  define op_errno() (WSAGetLastError()?WSAGetLastError()-WSABASEERR:0)\n#  define op_reset_errno() (WSASetLastError(0))\n\n/*The remaining functions don't get an op_ prefix even though they only\n   operate on sockets, because we don't use non-socket I/O here, and this\n   minimizes the changes needed to deal with Winsock.*/\n#  define close(_fd) closesocket(_fd)\n/*This takes an int for the address length, even though the value 
is of type\n   socklen_t (defined as an unsigned integer type with at least 32 bits).*/\n#  define connect(_fd,_addr,_addrlen) \\\n (OP_UNLIKELY((_addrlen)>(socklen_t)INT_MAX)? \\\n  WSASetLastError(WSA_NOT_ENOUGH_MEMORY),-1: \\\n  connect(_fd,_addr,(int)(_addrlen)))\n/*This relies on sizeof(u_long)==sizeof(int), which is always true on both\n   Win32 and Win64.*/\n#  define ioctl(_fd,_req,_arg) ioctlsocket(_fd,_req,(u_long *)(_arg))\n#  define getsockopt(_fd,_level,_name,_val,_len) \\\n getsockopt(_fd,_level,_name,(char *)(_val),_len)\n#  define setsockopt(_fd,_level,_name,_val,_len) \\\n setsockopt(_fd,_level,_name,(const char *)(_val),_len)\n#  define poll(_fds,_nfds,_timeout) op_poll_win32(_fds,_nfds,_timeout)\n\n#  if defined(_MSC_VER)\ntypedef ptrdiff_t ssize_t;\n#  endif\n\n/*Load certificates from the built-in certificate store.*/\nint SSL_CTX_set_default_verify_paths_win32(SSL_CTX *_ssl_ctx);\n#  define SSL_CTX_set_default_verify_paths \\\n SSL_CTX_set_default_verify_paths_win32\n\n# else\n/*Normal Berkeley sockets.*/\n#  include <sys/ioctl.h>\n#  include <sys/types.h>\n#  include <sys/socket.h>\n#  include <arpa/inet.h>\n#  include <netinet/in.h>\n#  include <netinet/tcp.h>\n#  include <fcntl.h>\n#  include <netdb.h>\n#  include <poll.h>\n#  include <unistd.h>\n#  include <openssl/ssl.h>\n#  include <openssl/asn1.h>\n\ntypedef int op_sock;\n\n#  define OP_INVALID_SOCKET (-1)\n\n#  define op_errno() (errno)\n#  define op_reset_errno() (errno=0)\n\n# endif\n# include <sys/timeb.h>\n# include <openssl/x509v3.h>\n\n/*The maximum number of simultaneous connections.\n  RFC 2616 says this SHOULD NOT be more than 2, but everyone on the modern web\n   ignores that (e.g., IE 8 bumped theirs up from 2 to 6, Firefox uses 15).\n  If it makes you feel better, we'll only ever actively read from one of these\n   at a time.\n  The others are kept around mainly to avoid slow-starting a new connection\n   when seeking, and time out rapidly.*/\n# define OP_NCONNS_MAX 
(4)\n\n/*The amount of time before we attempt to re-resolve the host.\n  This is 10 minutes, as recommended in RFC 6555 for expiring cached connection\n   results for dual-stack hosts.*/\n# define OP_RESOLVE_CACHE_TIMEOUT_MS (10*60*(opus_int32)1000)\n\n/*The number of redirections at which we give up.\n  The value here is the current default in Firefox.\n  RFC 2068 mandated a maximum of 5, but RFC 2616 relaxed that to \"a client\n   SHOULD detect infinite redirection loops.\"\n  Fortunately, 20 is less than infinity.*/\n# define OP_REDIRECT_LIMIT (20)\n\n/*The initial size of the buffer used to read a response message (before the\n   body).*/\n# define OP_RESPONSE_SIZE_MIN (510)\n/*The maximum size of a response message (before the body).\n  Responses larger than this will be discarded.\n  I've seen a real server return 20 kB of data for a 302 Found response.\n  Increasing this beyond 32kB will cause problems on platforms with a 16-bit\n   int.*/\n# define OP_RESPONSE_SIZE_MAX (32766)\n\n/*The number of milliseconds we will allow a connection to sit idle before we\n   refuse to resurrect it.\n  Apache as of 2.2 has reduced its default timeout to 5 seconds (from 15), so\n   that's what we'll use here.*/\n# define OP_CONNECTION_IDLE_TIMEOUT_MS (5*1000)\n\n/*The number of milliseconds we will wait to send or receive data before giving\n   up.*/\n# define OP_POLL_TIMEOUT_MS (30*1000)\n\n/*We will always attempt to read ahead at least this much in preference to\n   opening a new connection.*/\n# define OP_READAHEAD_THRESH_MIN (32*(opus_int32)1024)\n\n/*The amount of data to request after a seek.\n  This is a trade-off between read throughput after a seek vs. 
the ability\n   to quickly perform another seek with the same connection.*/\n# define OP_PIPELINE_CHUNK_SIZE     (32*(opus_int32)1024)\n/*Subsequent chunks are requested with larger and larger sizes until they pass\n   this threshold, after which we just ask for the rest of the resource.*/\n# define OP_PIPELINE_CHUNK_SIZE_MAX (1024*(opus_int32)1024)\n/*This is the maximum number of requests we'll make with a single connection.\n  Many servers will simply disconnect after we attempt some number of requests,\n   possibly without sending a Connection: close header, meaning we won't\n   discover it until we try to read beyond the end of the current chunk.\n  We can reconnect when that happens, but this is slow.\n  Instead, we impose a limit ourselves (set to the default for Apache\n   installations and thus likely the most common value in use).*/\n# define OP_PIPELINE_MAX_REQUESTS   (100)\n/*This should be the number of requests, starting from a chunk size of\n   OP_PIPELINE_CHUNK_SIZE and doubling each time, until we exceed\n   OP_PIPELINE_CHUNK_SIZE_MAX and just request the rest of the file.\n  We won't reuse a connection when seeking unless it has at least this many\n   requests left, to reduce the chances we'll have to open a new connection\n   while reading forward afterwards.*/\n# define OP_PIPELINE_MIN_REQUESTS   (7)\n\n/*Is this an https URL?\n  For now we can simply check the last letter of the scheme.*/\n# define OP_URL_IS_SSL(_url) ((_url)->scheme[4]=='s')\n\n/*Does this URL use the default port for its scheme?*/\n# define OP_URL_IS_DEFAULT_PORT(_url) \\\n (!OP_URL_IS_SSL(_url)&&(_url)->port==80 \\\n ||OP_URL_IS_SSL(_url)&&(_url)->port==443)\n\nstruct OpusParsedURL{\n  /*Either \"http\" or \"https\".*/\n  char     *scheme;\n  /*The user name from the <userinfo> component, or NULL.*/\n  char     *user;\n  /*The password from the <userinfo> component, or NULL.*/\n  char     *pass;\n  /*The <host> component.\n    This may not be NULL.*/\n  char     *host;\n 
 /*The <path> and <query> components.\n    This may not be NULL.*/\n  char     *path;\n  /*The <port> component.\n    This is set to the default port if the URL did not contain one.*/\n  unsigned  port;\n};\n\n/*Parse a URL.\n  This code is not meant to be fast: strspn() with large sets is likely to be\n   slow, but it is very convenient.\n  It is meant to be RFC 3986-compliant.\n  We currently do not support IRIs (Internationalized Resource Identifiers,\n   RFC 3987).\n  Callers should translate them to URIs first.*/\nstatic int op_parse_url_impl(OpusParsedURL *_dst,const char *_src){\n  const char  *scheme_end;\n  const char  *authority;\n  const char  *userinfo_end;\n  const char  *user;\n  const char  *user_end;\n  const char  *pass;\n  const char  *hostport;\n  const char  *hostport_end;\n  const char  *host_end;\n  const char  *port;\n  opus_int32   port_num;\n  const char  *port_end;\n  const char  *path;\n  const char  *path_end;\n  const char  *uri_end;\n  scheme_end=_src+strspn(_src,OP_URL_SCHEME);\n  if(OP_UNLIKELY(*scheme_end!=':')\n   ||OP_UNLIKELY(scheme_end-_src<4)||OP_UNLIKELY(scheme_end-_src>5)\n   ||OP_UNLIKELY(op_strncasecmp(_src,\"https\",(int)(scheme_end-_src))!=0)){\n    /*Unsupported protocol.*/\n    return OP_EIMPL;\n  }\n  if(OP_UNLIKELY(scheme_end[1]!='/')||OP_UNLIKELY(scheme_end[2]!='/')){\n    /*We require an <authority> component.*/\n    return OP_EINVAL;\n  }\n  authority=scheme_end+3;\n  /*Make sure all escape sequences are valid to simplify unescaping later.*/\n  if(OP_UNLIKELY(op_validate_url_escapes(authority)<0))return OP_EINVAL;\n  /*Look for a <userinfo> component.*/\n  userinfo_end=authority+strspn(authority,OP_URL_PCHAR_NA);\n  if(*userinfo_end=='@'){\n    /*Found one.*/\n    user=authority;\n    /*Look for a password (yes, clear-text passwords are deprecated, I know,\n       but what else are people supposed to use? 
use SSL if you care).*/\n    user_end=authority+strspn(authority,OP_URL_PCHAR_BASE);\n    if(*user_end==':')pass=user_end+1;\n    else pass=NULL;\n    hostport=userinfo_end+1;\n  }\n  else{\n    /*We shouldn't have to initialize user_end, but gcc is too dumb to figure\n       out that user!=NULL below means we didn't take this else branch.*/\n    user=user_end=NULL;\n    pass=NULL;\n    hostport=authority;\n  }\n  /*Try to figure out where the <host> component ends.*/\n  if(hostport[0]=='['){\n    hostport++;\n    /*We have an <IP-literal>, which can contain colons.*/\n    hostport_end=host_end=hostport+strspn(hostport,OP_URL_PCHAR_NA);\n    if(OP_UNLIKELY(*hostport_end++!=']'))return OP_EINVAL;\n  }\n  /*Currently we don't support IDNA (RFC 5894), because I don't want to deal\n     with the policy about which domains should not be internationalized to\n     avoid confusing similarities.\n    Give this API Punycode (RFC 3492) domain names instead.*/\n  else hostport_end=host_end=hostport+strspn(hostport,OP_URL_PCHAR_BASE);\n  /*TODO: Validate host.*/\n  /*Is there a port number?*/\n  port_num=-1;\n  if(*hostport_end==':'){\n    int i;\n    port=hostport_end+1;\n    port_end=port+strspn(port,OP_URL_DIGIT);\n    path=port_end;\n    /*Not part of RFC 3986, but require port numbers in the range 0...65535.*/\n    if(OP_LIKELY(port_end-port>0)){\n      while(*port=='0')port++;\n      if(OP_UNLIKELY(port_end-port>5))return OP_EINVAL;\n      port_num=0;\n      for(i=0;i<port_end-port;i++)port_num=port_num*10+port[i]-'0';\n      if(OP_UNLIKELY(port_num>65535))return OP_EINVAL;\n    }\n  }\n  else path=hostport_end;\n  path_end=path+strspn(path,OP_URL_PATH);\n  /*If the path is not empty, it must begin with a '/'.*/\n  if(OP_LIKELY(path_end>path)&&OP_UNLIKELY(path[0]!='/'))return OP_EINVAL;\n  /*Consume the <query> component, if any (right now we don't split this out\n     from the <path> component).*/\n  
if(*path_end=='?')path_end=path_end+strspn(path_end,OP_URL_QUERY_FRAG);\n  /*Discard the <fragment> component, if any.\n    This doesn't get sent to the server.\n    Some day we should add support for Media Fragment URIs\n     <http://www.w3.org/TR/media-frags/>.*/\n  if(*path_end=='#')uri_end=path_end+1+strspn(path_end+1,OP_URL_QUERY_FRAG);\n  else uri_end=path_end;\n  /*If there's anything left, this was not a valid URL.*/\n  if(OP_UNLIKELY(*uri_end!='\\0'))return OP_EINVAL;\n  _dst->scheme=op_string_range_dup(_src,scheme_end);\n  if(OP_UNLIKELY(_dst->scheme==NULL))return OP_EFAULT;\n  op_string_tolower(_dst->scheme);\n  if(user!=NULL){\n    _dst->user=op_string_range_dup(user,user_end);\n    if(OP_UNLIKELY(_dst->user==NULL))return OP_EFAULT;\n    op_unescape_url_component(_dst->user);\n    /*Unescaping might have created a ':' in the username.\n      That's not allowed by RFC 2617's Basic Authentication Scheme.*/\n    if(OP_UNLIKELY(strchr(_dst->user,':')!=NULL))return OP_EINVAL;\n  }\n  else _dst->user=NULL;\n  if(pass!=NULL){\n    _dst->pass=op_string_range_dup(pass,userinfo_end);\n    if(OP_UNLIKELY(_dst->pass==NULL))return OP_EFAULT;\n    op_unescape_url_component(_dst->pass);\n  }\n  else _dst->pass=NULL;\n  _dst->host=op_string_range_dup(hostport,host_end);\n  if(OP_UNLIKELY(_dst->host==NULL))return OP_EFAULT;\n  if(port_num<0){\n    if(_src[4]=='s')port_num=443;\n    else port_num=80;\n  }\n  _dst->port=(unsigned)port_num;\n  /*RFC 2616 says an empty <abs-path> component is equivalent to \"/\", and we\n     MUST use the latter in the Request-URI.\n    Reserve space for the slash here.*/\n  if(path==path_end||path[0]=='?')path--;\n  _dst->path=op_string_range_dup(path,path_end);\n  if(OP_UNLIKELY(_dst->path==NULL))return OP_EFAULT;\n  /*And force-set it here.*/\n  _dst->path[0]='/';\n  return 0;\n}\n\nstatic void op_parsed_url_init(OpusParsedURL *_url){\n  memset(_url,0,sizeof(*_url));\n}\n\nstatic void op_parsed_url_clear(OpusParsedURL *_url){\n  
_ogg_free(_url->scheme);\n  _ogg_free(_url->user);\n  _ogg_free(_url->pass);\n  _ogg_free(_url->host);\n  _ogg_free(_url->path);\n}\n\nstatic int op_parse_url(OpusParsedURL *_dst,const char *_src){\n  OpusParsedURL url;\n  int           ret;\n  op_parsed_url_init(&url);\n  ret=op_parse_url_impl(&url,_src);\n  if(OP_UNLIKELY(ret<0))op_parsed_url_clear(&url);\n  else *_dst=*&url;\n  return ret;\n}\n\n/*A buffer to hold growing strings.\n  The main purpose of this is to consolidate allocation checks and simplify\n   cleanup on a failed allocation.*/\nstruct OpusStringBuf{\n  char *buf;\n  int   nbuf;\n  int   cbuf;\n};\n\nstatic void op_sb_init(OpusStringBuf *_sb){\n  _sb->buf=NULL;\n  _sb->nbuf=0;\n  _sb->cbuf=0;\n}\n\nstatic void op_sb_clear(OpusStringBuf *_sb){\n  _ogg_free(_sb->buf);\n}\n\n/*Make sure we have room for at least _capacity characters (plus 1 more for the\n   terminating NUL).*/\nstatic int op_sb_ensure_capacity(OpusStringBuf *_sb,int _capacity){\n  char *buf;\n  int   cbuf;\n  buf=_sb->buf;\n  cbuf=_sb->cbuf;\n  if(_capacity>=cbuf-1){\n    if(OP_UNLIKELY(cbuf>INT_MAX-1>>1))return OP_EFAULT;\n    if(OP_UNLIKELY(_capacity>=INT_MAX-1))return OP_EFAULT;\n    cbuf=OP_MAX(2*cbuf+1,_capacity+1);\n    buf=_ogg_realloc(buf,sizeof(*buf)*cbuf);\n    if(OP_UNLIKELY(buf==NULL))return OP_EFAULT;\n    _sb->buf=buf;\n    _sb->cbuf=cbuf;\n  }\n  return 0;\n}\n\n/*Increase the capacity of the buffer, but not to more than _max_size\n   characters (plus 1 more for the terminating NUL).*/\nstatic int op_sb_grow(OpusStringBuf *_sb,int _max_size){\n  char *buf;\n  int   cbuf;\n  buf=_sb->buf;\n  cbuf=_sb->cbuf;\n  OP_ASSERT(_max_size<=INT_MAX-1);\n  cbuf=cbuf<=_max_size-1>>1?2*cbuf+1:_max_size+1;\n  buf=_ogg_realloc(buf,sizeof(*buf)*cbuf);\n  if(OP_UNLIKELY(buf==NULL))return OP_EFAULT;\n  _sb->buf=buf;\n  _sb->cbuf=cbuf;\n  return 0;\n}\n\nstatic int op_sb_append(OpusStringBuf *_sb,const char *_s,int _len){\n  char *buf;\n  int   nbuf;\n  int   ret;\n  nbuf=_sb->nbuf;\n  
if(OP_UNLIKELY(nbuf>INT_MAX-_len))return OP_EFAULT;\n  ret=op_sb_ensure_capacity(_sb,nbuf+_len);\n  if(OP_UNLIKELY(ret<0))return ret;\n  buf=_sb->buf;\n  memcpy(buf+nbuf,_s,sizeof(*buf)*_len);\n  nbuf+=_len;\n  buf[nbuf]='\\0';\n  _sb->nbuf=nbuf;\n  return 0;\n}\n\nstatic int op_sb_append_string(OpusStringBuf *_sb,const char *_s){\n  size_t len;\n  len=strlen(_s);\n  if(OP_UNLIKELY(len>(size_t)INT_MAX))return OP_EFAULT;\n  return op_sb_append(_sb,_s,(int)len);\n}\n\nstatic int op_sb_append_port(OpusStringBuf *_sb,unsigned _port){\n  char port_buf[7];\n  OP_ASSERT(_port<=65535U);\n  sprintf(port_buf,\":%u\",_port);\n  return op_sb_append_string(_sb,port_buf);\n}\n\nstatic int op_sb_append_nonnegative_int64(OpusStringBuf *_sb,opus_int64 _i){\n  char digit;\n  int  nbuf_start;\n  int  ret;\n  OP_ASSERT(_i>=0);\n  nbuf_start=_sb->nbuf;\n  ret=0;\n  do{\n    digit='0'+_i%10;\n    ret|=op_sb_append(_sb,&digit,1);\n    _i/=10;\n  }\n  while(_i>0);\n  if(OP_LIKELY(ret>=0)){\n    char *buf;\n    int   nbuf_end;\n    buf=_sb->buf;\n    nbuf_end=_sb->nbuf-1;\n    /*We've added the digits backwards.\n      Reverse them.*/\n    while(nbuf_start<nbuf_end){\n      digit=buf[nbuf_start];\n      buf[nbuf_start]=buf[nbuf_end];\n      buf[nbuf_end]=digit;\n      nbuf_start++;\n      nbuf_end--;\n    }\n  }\n  return ret;\n}\n\nstatic struct addrinfo *op_resolve(const char *_host,unsigned _port){\n  struct addrinfo *addrs;\n  struct addrinfo  hints;\n  char             service[6];\n  memset(&hints,0,sizeof(hints));\n  hints.ai_socktype=SOCK_STREAM;\n#if defined(AI_NUMERICSERV)\n  hints.ai_flags=AI_NUMERICSERV;\n#endif\n  OP_ASSERT(_port<=65535U);\n  sprintf(service,\"%u\",_port);\n  if(OP_LIKELY(!getaddrinfo(_host,service,&hints,&addrs)))return addrs;\n  return NULL;\n}\n\nstatic int op_sock_set_nonblocking(op_sock _fd,int _nonblocking){\n#if !defined(_WIN32)\n  int flags;\n  flags=fcntl(_fd,F_GETFL);\n  if(OP_UNLIKELY(flags<0))return flags;\n  if(_nonblocking)flags|=O_NONBLOCK;\n  
else flags&=~O_NONBLOCK;\n  return fcntl(_fd,F_SETFL,flags);\n#else\n  return ioctl(_fd,FIONBIO,&_nonblocking);\n#endif\n}\n\n/*Disable/enable write coalescing if we can.\n  We always send whole requests at once and always parse the response headers\n   before sending another one, so normally write coalescing just causes added\n   delay.*/\nstatic void op_sock_set_tcp_nodelay(op_sock _fd,int _nodelay){\n# if defined(TCP_NODELAY)&&(defined(IPPROTO_TCP)||defined(SOL_TCP))\n#  if defined(IPPROTO_TCP)\n#   define OP_SO_LEVEL IPPROTO_TCP\n#  else\n#   define OP_SO_LEVEL SOL_TCP\n#  endif\n  /*It doesn't really matter if this call fails, but it would be interesting\n     to hit a case where it does.*/\n  OP_ALWAYS_TRUE(!setsockopt(_fd,OP_SO_LEVEL,TCP_NODELAY,\n   &_nodelay,sizeof(_nodelay)));\n# endif\n}\n\n#if defined(_WIN32)\nstatic void op_init_winsock(void){\n  static LONG    count;\n  static WSADATA wsadata;\n  if(InterlockedIncrement(&count)==1)WSAStartup(0x0202,&wsadata);\n}\n#endif\n\n/*A single physical connection to an HTTP server.\n  We may have several of these open at once.*/\nstruct OpusHTTPConn{\n  /*The current position indicator for this connection.*/\n  opus_int64    pos;\n  /*The position where the current request will end, or -1 if we're reading\n     until EOF (an unseekable stream or the initial HTTP/1.0 request).*/\n  opus_int64    end_pos;\n  /*The position where the next request we've sent will start, or -1 if we\n     haven't sent the next request yet.*/\n  opus_int64    next_pos;\n  /*The end of the next request or -1 if we requested the rest of the resource.\n    This is only set to a meaningful value if next_pos is not -1.*/\n  opus_int64    next_end;\n  /*The SSL connection, if this is https.*/\n  SSL          *ssl_conn;\n  /*The next connection in either the LRU or free list.*/\n  OpusHTTPConn *next;\n  /*The last time we blocked for reading from this connection.*/\n  struct timeb  read_time;\n  /*The number of bytes we've read since the last time 
we blocked.*/\n  opus_int64    read_bytes;\n  /*The estimated throughput of this connection, in bytes/s.*/\n  opus_int64    read_rate;\n  /*The socket we're reading from.*/\n  op_sock       fd;\n  /*The number of remaining requests we are allowed on this connection.*/\n  int           nrequests_left;\n  /*The chunk size to use for pipelining requests.*/\n  opus_int32    chunk_size;\n};\n\nstatic void op_http_conn_init(OpusHTTPConn *_conn){\n  _conn->next_pos=-1;\n  _conn->ssl_conn=NULL;\n  _conn->next=NULL;\n  _conn->fd=OP_INVALID_SOCKET;\n}\n\nstatic void op_http_conn_clear(OpusHTTPConn *_conn){\n  if(_conn->ssl_conn!=NULL)SSL_free(_conn->ssl_conn);\n  /*SSL frees the BIO for us.*/\n  if(_conn->fd!=OP_INVALID_SOCKET)close(_conn->fd);\n}\n\n/*The global stream state.*/\nstruct OpusHTTPStream{\n  /*The list of connections.*/\n  OpusHTTPConn     conns[OP_NCONNS_MAX];\n  /*The context object used as a framework for TLS/SSL functions.*/\n  SSL_CTX         *ssl_ctx;\n  /*The cached session to reuse for future connections.*/\n  SSL_SESSION     *ssl_session;\n  /*The LRU list (ordered from MRU to LRU) of currently connected\n     connections.*/\n  OpusHTTPConn    *lru_head;\n  /*The free list.*/\n  OpusHTTPConn    *free_head;\n  /*The URL to connect to.*/\n  OpusParsedURL    url;\n  /*Information about the address we connected to.*/\n  struct addrinfo  addr_info;\n  /*The address we connected to.*/\n  union{\n    struct sockaddr     s;\n    struct sockaddr_in  v4;\n    struct sockaddr_in6 v6;\n  }                addr;\n  /*The last time we re-resolved the host.*/\n  struct timeb     resolve_time;\n  /*A buffer used to build HTTP requests.*/\n  OpusStringBuf    request;\n  /*A buffer used to build proxy CONNECT requests.*/\n  OpusStringBuf    proxy_connect;\n  /*A buffer used to receive the response headers.*/\n  OpusStringBuf    response;\n  /*The Content-Length, if specified, or -1 otherwise.\n    This will always be specified for seekable streams.*/\n  opus_int64       
content_length;\n  /*The position indicator used when no connection is active.*/\n  opus_int64       pos;\n  /*The host we actually connected to.*/\n  char            *connect_host;\n  /*The port we actually connected to.*/\n  unsigned         connect_port;\n  /*The connection we're currently reading from.\n    This can be -1 if no connection is active.*/\n  int              cur_conni;\n  /*Whether or not the server supports range requests.*/\n  int              seekable;\n  /*Whether or not the server supports HTTP/1.1 with persistent connections.*/\n  int              pipeline;\n  /*Whether or not we should skip certificate checks.*/\n  int              skip_certificate_check;\n  /*The offset of the tail of the request.\n    Only the offset in the Range: header appears after this, allowing us to\n     quickly edit the request to ask for a new range.*/\n  int              request_tail;\n  /*The estimated time required to open a new connection, in milliseconds.*/\n  opus_int32       connect_rate;\n};\n\nstatic void op_http_stream_init(OpusHTTPStream *_stream){\n  OpusHTTPConn **pnext;\n  int            ci;\n  pnext=&_stream->free_head;\n  for(ci=0;ci<OP_NCONNS_MAX;ci++){\n    op_http_conn_init(_stream->conns+ci);\n    *pnext=_stream->conns+ci;\n    pnext=&_stream->conns[ci].next;\n  }\n  _stream->ssl_ctx=NULL;\n  _stream->ssl_session=NULL;\n  _stream->lru_head=NULL;\n  op_parsed_url_init(&_stream->url);\n  op_sb_init(&_stream->request);\n  op_sb_init(&_stream->proxy_connect);\n  op_sb_init(&_stream->response);\n  _stream->connect_host=NULL;\n  _stream->seekable=0;\n}\n\n/*Close the connection and move it to the free list.\n  _stream:     The stream containing the free list.\n  _conn:       The connection to close.\n  _pnext:      The linked-list pointer currently pointing to this connection.\n  _gracefully: Whether or not to shut down cleanly.*/\nstatic void op_http_conn_close(OpusHTTPStream *_stream,OpusHTTPConn *_conn,\n OpusHTTPConn **_pnext,int _gracefully){\n  
/*If we don't shut down gracefully, the server MUST NOT re-use our session\n     according to RFC 2246, because it can't tell the difference between an\n     abrupt close and a truncation attack.\n    So we shut down gracefully if we can.\n    However, we will not wait if this would block (it's not worth the savings\n     from session resumption to do so).\n    Clients (that's us) MAY resume a TLS session that ended with an incomplete\n     close, according to RFC 2818, so there's no reason to make sure the server\n     shut things down gracefully.*/\n  if(_gracefully&&_conn->ssl_conn!=NULL)SSL_shutdown(_conn->ssl_conn);\n  op_http_conn_clear(_conn);\n  _conn->next_pos=-1;\n  _conn->ssl_conn=NULL;\n  _conn->fd=OP_INVALID_SOCKET;\n  OP_ASSERT(*_pnext==_conn);\n  *_pnext=_conn->next;\n  _conn->next=_stream->free_head;\n  _stream->free_head=_conn;\n}\n\nstatic void op_http_stream_clear(OpusHTTPStream *_stream){\n  while(_stream->lru_head!=NULL){\n    op_http_conn_close(_stream,_stream->lru_head,&_stream->lru_head,0);\n  }\n  if(_stream->ssl_session!=NULL)SSL_SESSION_free(_stream->ssl_session);\n  if(_stream->ssl_ctx!=NULL)SSL_CTX_free(_stream->ssl_ctx);\n  op_sb_clear(&_stream->response);\n  op_sb_clear(&_stream->proxy_connect);\n  op_sb_clear(&_stream->request);\n  if(_stream->connect_host!=_stream->url.host)_ogg_free(_stream->connect_host);\n  op_parsed_url_clear(&_stream->url);\n}\n\nstatic int op_http_conn_write_fully(OpusHTTPConn *_conn,\n const char *_buf,int _buf_size){\n  struct pollfd  fd;\n  SSL           *ssl_conn;\n  fd.fd=_conn->fd;\n  ssl_conn=_conn->ssl_conn;\n  while(_buf_size>0){\n    int err;\n    if(ssl_conn!=NULL){\n      int ret;\n      ret=SSL_write(ssl_conn,_buf,_buf_size);\n      if(ret>0){\n        /*Wrote some data.*/\n        _buf+=ret;\n        _buf_size-=ret;\n        continue;\n      }\n      /*Connection closed.*/\n      else if(ret==0)return OP_FALSE;\n      err=SSL_get_error(ssl_conn,ret);\n      /*Yes, renegotiations can cause 
SSL_write() to block for reading.*/\n      if(err==SSL_ERROR_WANT_READ)fd.events=POLLIN;\n      else if(err==SSL_ERROR_WANT_WRITE)fd.events=POLLOUT;\n      else return OP_FALSE;\n    }\n    else{\n      ssize_t ret;\n      op_reset_errno();\n      ret=send(fd.fd,_buf,_buf_size,0);\n      if(ret>0){\n        _buf+=ret;\n        OP_ASSERT(ret<=_buf_size);\n        _buf_size-=(int)ret;\n        continue;\n      }\n      err=op_errno();\n      if(err!=EAGAIN&&err!=EWOULDBLOCK)return OP_FALSE;\n      fd.events=POLLOUT;\n    }\n    if(poll(&fd,1,OP_POLL_TIMEOUT_MS)<=0)return OP_FALSE;\n  }\n  return 0;\n}\n\nstatic int op_http_conn_estimate_available(OpusHTTPConn *_conn){\n  int available;\n  int ret;\n  ret=ioctl(_conn->fd,FIONREAD,&available);\n  if(ret<0)available=0;\n  /*This requires the SSL read_ahead flag to be unset to work.\n    We ignore partial records as well as the protocol overhead for any pending\n     bytes.\n    This means we might return somewhat less than can truly be read without\n     blocking (if there's a partial record).\n    This is okay, because we're using this value to estimate network transfer\n     time, and we _have_ already received those bytes.\n    We also might return slightly more (due to protocol overhead), but that's\n     small enough that it probably doesn't matter.*/\n  if(_conn->ssl_conn!=NULL)available+=SSL_pending(_conn->ssl_conn);\n  return available;\n}\n\nstatic opus_int32 op_time_diff_ms(const struct timeb *_end,\n const struct timeb *_start){\n  opus_int64 dtime;\n  dtime=_end->time-(opus_int64)_start->time;\n  OP_ASSERT(_end->millitm<1000);\n  OP_ASSERT(_start->millitm<1000);\n  if(OP_UNLIKELY(dtime>(OP_INT32_MAX-1000)/1000))return OP_INT32_MAX;\n  if(OP_UNLIKELY(dtime<(OP_INT32_MIN+1000)/1000))return OP_INT32_MIN;\n  return (opus_int32)dtime*1000+_end->millitm-_start->millitm;\n}\n\n/*Update the read rate estimate for this connection.*/\nstatic void op_http_conn_read_rate_update(OpusHTTPConn *_conn){\n  struct timeb 
read_time;\n  opus_int32   read_delta_ms;\n  opus_int64   read_delta_bytes;\n  opus_int64   read_rate;\n  read_delta_bytes=_conn->read_bytes;\n  if(read_delta_bytes<=0)return;\n  ftime(&read_time);\n  read_delta_ms=op_time_diff_ms(&read_time,&_conn->read_time);\n  read_rate=_conn->read_rate;\n  read_delta_ms=OP_MAX(read_delta_ms,1);\n  read_rate+=read_delta_bytes*1000/read_delta_ms-read_rate+4>>3;\n  *&_conn->read_time=*&read_time;\n  _conn->read_bytes=0;\n  _conn->read_rate=read_rate;\n}\n\n/*Tries to read from the given connection.\n  [out] _buf: Returns the data read.\n  _buf_size:  The size of the buffer.\n  _blocking:  Whether or not to block until some data is retrieved.\n  Return: A positive number of bytes read on success.\n          0:        The read would block, or the connection was closed.\n          OP_EREAD: There was a fatal read error.*/\nstatic int op_http_conn_read(OpusHTTPConn *_conn,\n char *_buf,int _buf_size,int _blocking){\n  struct pollfd  fd;\n  SSL           *ssl_conn;\n  int            nread;\n  int            nread_unblocked;\n  fd.fd=_conn->fd;\n  ssl_conn=_conn->ssl_conn;\n  nread=nread_unblocked=0;\n  /*RFC 2818 says \"client implementations MUST treat any premature closes as\n     errors and the data received as potentially truncated,\" so we make very\n     sure to report read errors upwards.*/\n  do{\n    int err;\n    if(ssl_conn!=NULL){\n      int ret;\n      ret=SSL_read(ssl_conn,_buf+nread,_buf_size-nread);\n      OP_ASSERT(ret<=_buf_size-nread);\n      if(ret>0){\n        /*Read some data.\n          Keep going to see if there's more.*/\n        nread+=ret;\n        nread_unblocked+=ret;\n        continue;\n      }\n      /*If we already read some data, return it right now.*/\n      if(nread>0)break;\n      err=SSL_get_error(ssl_conn,ret);\n      if(ret==0){\n        /*Connection close.\n          Check for a clean shutdown to prevent truncation attacks.\n          This check always succeeds for SSLv2, as it has no \"close 
notify\"\n           message and thus can't verify an orderly shutdown.*/\n        return err==SSL_ERROR_ZERO_RETURN?0:OP_EREAD;\n      }\n      if(err==SSL_ERROR_WANT_READ)fd.events=POLLIN;\n      /*Yes, renegotiations can cause SSL_read() to block for writing.*/\n      else if(err==SSL_ERROR_WANT_WRITE)fd.events=POLLOUT;\n      /*Some other error.*/\n      else return OP_EREAD;\n    }\n    else{\n      ssize_t ret;\n      op_reset_errno();\n      ret=recv(fd.fd,_buf+nread,_buf_size-nread,0);\n      OP_ASSERT(ret<=_buf_size-nread);\n      if(ret>0){\n        /*Read some data.\n          Keep going to see if there's more.*/\n        nread+=(int)ret;\n        nread_unblocked+=(int)ret;\n        continue;\n      }\n      /*If we already read some data or the connection was closed, return\n         right now.*/\n      if(ret==0||nread>0)break;\n      err=op_errno();\n      if(err!=EAGAIN&&err!=EWOULDBLOCK)return OP_EREAD;\n      fd.events=POLLIN;\n    }\n    _conn->read_bytes+=nread_unblocked;\n    op_http_conn_read_rate_update(_conn);\n    nread_unblocked=0;\n    if(!_blocking)break;\n    /*Need to wait to get any data at all.*/\n    if(poll(&fd,1,OP_POLL_TIMEOUT_MS)<=0)return OP_EREAD;\n  }\n  while(nread<_buf_size);\n  _conn->read_bytes+=nread_unblocked;\n  return nread;\n}\n\n/*Tries to look at the pending data for a connection without consuming it.\n  [out] _buf: Returns the data at which we're peeking.\n  _buf_size:  The size of the buffer.*/\nstatic int op_http_conn_peek(OpusHTTPConn *_conn,char *_buf,int _buf_size){\n  struct pollfd   fd;\n  SSL            *ssl_conn;\n  int             ret;\n  fd.fd=_conn->fd;\n  ssl_conn=_conn->ssl_conn;\n  for(;;){\n    int err;\n    if(ssl_conn!=NULL){\n      ret=SSL_peek(ssl_conn,_buf,_buf_size);\n      /*Either saw some data or the connection was closed.*/\n      if(ret>=0)return ret;\n      err=SSL_get_error(ssl_conn,ret);\n      if(err==SSL_ERROR_WANT_READ)fd.events=POLLIN;\n    
  /*Yes, renegotiations can cause SSL_peek() to block for writing.*/\n      else if(err==SSL_ERROR_WANT_WRITE)fd.events=POLLOUT;\n      else return 0;\n    }\n    else{\n      op_reset_errno();\n      ret=(int)recv(fd.fd,_buf,_buf_size,MSG_PEEK);\n      /*Either saw some data or the connection was closed.*/\n      if(ret>=0)return ret;\n      err=op_errno();\n      if(err!=EAGAIN&&err!=EWOULDBLOCK)return 0;\n      fd.events=POLLIN;\n    }\n    /*Need to wait to get any data at all.*/\n    if(poll(&fd,1,OP_POLL_TIMEOUT_MS)<=0)return 0;\n  }\n}\n\n/*When parsing response headers, RFC 2616 mandates that all lines end in CR LF.\n  However, even in the year 2012, I have seen broken servers use just a LF.\n  This is the evil that Postel's advice from RFC 761 breeds.*/\n\n/*Reads the entirety of a response to an HTTP request into the response buffer.\n  Actual parsing and validation is done later.\n  Return: The number of bytes in the response on success, OP_EREAD if the\n           connection was closed before reading any data, or another negative\n           value on any other error.*/\nstatic int op_http_conn_read_response(OpusHTTPConn *_conn,\n OpusStringBuf *_response){\n  int ret;\n  _response->nbuf=0;\n  ret=op_sb_ensure_capacity(_response,OP_RESPONSE_SIZE_MIN);\n  if(OP_UNLIKELY(ret<0))return ret;\n  for(;;){\n    char *buf;\n    int   size;\n    int   capacity;\n    int   read_limit;\n    int   terminated;\n    size=_response->nbuf;\n    capacity=_response->cbuf-1;\n    if(OP_UNLIKELY(size>=capacity)){\n      ret=op_sb_grow(_response,OP_RESPONSE_SIZE_MAX);\n      if(OP_UNLIKELY(ret<0))return ret;\n      capacity=_response->cbuf-1;\n      /*The response was too large.\n        This prevents a bad server from running us out of memory.*/\n      if(OP_UNLIKELY(size>=capacity))return OP_EIMPL;\n    }\n    buf=_response->buf;\n    ret=op_http_conn_peek(_conn,buf+size,capacity-size);\n    if(OP_UNLIKELY(ret<=0))return size<=0?OP_EREAD:OP_FALSE;\n    /*We read some 
data.*/\n    /*Make sure the starting characters are \"HTTP\".\n      Otherwise we could wind up waiting for a response from something that is\n       not an HTTP server until we time out.*/\n    if(size<4&&op_strncasecmp(buf,\"HTTP\",OP_MIN(size+ret,4))!=0){\n      return OP_FALSE;\n    }\n    /*How far can we read without passing the \"\\r\\n\\r\\n\" terminator?*/\n    buf[size+ret]='\\0';\n    terminated=0;\n    for(read_limit=OP_MAX(size-3,0);read_limit<size+ret;read_limit++){\n      /*We don't look for the leading '\\r' thanks to broken servers.*/\n      if(buf[read_limit]=='\\n'){\n        if(buf[read_limit+1]=='\\r'&&OP_LIKELY(buf[read_limit+2]=='\\n')){\n          terminated=3;\n          break;\n        }\n        /*This case is for broken servers.*/\n        else if(OP_UNLIKELY(buf[read_limit+1]=='\\n')){\n          terminated=2;\n          break;\n        }\n      }\n    }\n    read_limit+=terminated;\n    OP_ASSERT(size<=read_limit);\n    OP_ASSERT(read_limit<=size+ret);\n    /*Actually consume that data.*/\n    ret=op_http_conn_read(_conn,buf+size,read_limit-size,1);\n    if(OP_UNLIKELY(ret<=0))return OP_FALSE;\n    size+=ret;\n    buf[size]='\\0';\n    _response->nbuf=size;\n    /*We found the terminator and read all the data up to and including it.*/\n    if(terminated&&OP_LIKELY(size>=read_limit))return size;\n  }\n  return OP_EIMPL;\n}\n\n# define OP_HTTP_DIGIT \"0123456789\"\n\n/*The Reason-Phrase is not allowed to contain control characters, except\n   horizontal tab (HT: \\011).*/\n# define OP_HTTP_CREASON_PHRASE \\\n \"\\001\\002\\003\\004\\005\\006\\007\\010\\012\\013\\014\\015\\016\\017\\020\\021\" \\\n \"\\022\\023\\024\\025\\026\\027\\030\\031\\032\\033\\034\\035\\036\\037\\177\"\n\n# define OP_HTTP_CTLS \\\n \"\\001\\002\\003\\004\\005\\006\\007\\010\\011\\012\\013\\014\\015\\016\\017\\020\" \\\n \"\\021\\022\\023\\024\\025\\026\\027\\030\\031\\032\\033\\034\\035\\036\\037\\177\"\n\n/*This also includes '\\t', but we get that from 
OP_HTTP_CTLS.*/\n# define OP_HTTP_SEPARATORS \" \\\"(),/:;<=>?@[\\\\]{}\"\n\n/*TEXT can also include LWS, but that has structure, so we parse it\n   separately.*/\n# define OP_HTTP_CTOKEN OP_HTTP_CTLS OP_HTTP_SEPARATORS\n\n/*Return: The amount of linear white space (LWS) at the start of _s.*/\nstatic int op_http_lwsspn(const char *_s){\n  int i;\n  for(i=0;;){\n    if(_s[i]=='\\r'&&_s[i+1]=='\\n'&&(_s[i+2]=='\\t'||_s[i+2]==' '))i+=3;\n    /*This case is for broken servers.*/\n    else if(_s[i]=='\\n'&&(_s[i+1]=='\\t'||_s[i+1]==' '))i+=2;\n    else if(_s[i]=='\\t'||_s[i]==' ')i++;\n    else return i;\n  }\n}\n\nstatic char *op_http_parse_status_line(int *_v1_1_compat,\n char **_status_code,char *_response){\n  char   *next;\n  char   *status_code;\n  int     v1_1_compat;\n  size_t  d;\n  /*RFC 2616 Section 6.1 does not say if the tokens in the Status-Line can be\n     separated by optional LWS, but since it specifically calls out where\n     spaces are to be placed and that CR and LF are not allowed except at the\n     end, we are assuming extra LWS is not allowed.*/\n  /*We already validated that this starts with \"HTTP\"*/\n  OP_ASSERT(op_strncasecmp(_response,\"HTTP\",4)==0);\n  next=_response+4;\n  if(OP_UNLIKELY(*next++!='/'))return NULL;\n  d=strspn(next,OP_HTTP_DIGIT);\n  /*\"Leading zeros MUST be ignored by recipients.\"*/\n  while(*next=='0'){\n    next++;\n    OP_ASSERT(d>0);\n    d--;\n  }\n  /*We only support version 1.x*/\n  if(OP_UNLIKELY(d!=1)||OP_UNLIKELY(*next++!='1'))return NULL;\n  if(OP_UNLIKELY(*next++!='.'))return NULL;\n  d=strspn(next,OP_HTTP_DIGIT);\n  if(OP_UNLIKELY(d<=0))return NULL;\n  /*\"Leading zeros MUST be ignored by recipients.\"*/\n  while(*next=='0'){\n    next++;\n    OP_ASSERT(d>0);\n    d--;\n  }\n  /*We don't need to parse the version number.\n    Any non-zero digit means it's at least 1.*/\n  v1_1_compat=d>0;\n  next+=d;\n  if(OP_UNLIKELY(*next++!=' '))return NULL;\n  status_code=next;\n  d=strspn(next,OP_HTTP_DIGIT);\n  
if(OP_UNLIKELY(d!=3))return NULL;\n  next+=d;\n  /*The Reason-Phrase can be empty, but the space must be here.*/\n  if(OP_UNLIKELY(*next++!=' '))return NULL;\n  next+=strcspn(next,OP_HTTP_CREASON_PHRASE);\n  /*We are not mandating this be present thanks to broken servers.*/\n  if(OP_LIKELY(*next=='\\r'))next++;\n  if(OP_UNLIKELY(*next++!='\\n'))return NULL;\n  if(_v1_1_compat!=NULL)*_v1_1_compat=v1_1_compat;\n  *_status_code=status_code;\n  return next;\n}\n\n/*Get the next response header.\n  [out] _header: The header token, NUL-terminated, with leading and trailing\n                  whitespace stripped, and converted to lower case (to simplify\n                  case-insensitive comparisons), or NULL if there are no more\n                  response headers.\n  [out] _cdr:    The remaining contents of the header, excluding the initial\n                  colon (':') and the terminating CRLF (\"\\r\\n\"),\n                  NUL-terminated, and with leading and trailing whitespace\n                  stripped, or NULL if there are no more response headers.\n  [inout] _s:    On input, this points to the start of the current line of the\n                  response headers.\n                 On output, it points to the start of the first line following\n                  this header, or NULL if there are no more response headers.\n  Return: 0 on success, or a negative value on failure.*/\nstatic int op_http_get_next_header(char **_header,char **_cdr,char **_s){\n  char   *header;\n  char   *header_end;\n  char   *cdr;\n  char   *cdr_end;\n  char   *next;\n  size_t  d;\n  next=*_s;\n  /*The second case is for broken servers.*/\n  if(next[0]=='\\r'&&next[1]=='\\n'||OP_UNLIKELY(next[0]=='\\n')){\n    /*No more headers.*/\n    *_header=NULL;\n    *_cdr=NULL;\n    *_s=NULL;\n    return 0;\n  }\n  header=next+op_http_lwsspn(next);\n  d=strcspn(header,OP_HTTP_CTOKEN);\n  if(OP_UNLIKELY(d<=0))return OP_FALSE;\n  header_end=header+d;\n  
next=header_end+op_http_lwsspn(header_end);\n  if(OP_UNLIKELY(*next++!=':'))return OP_FALSE;\n  next+=op_http_lwsspn(next);\n  cdr=next;\n  do{\n    cdr_end=next+strcspn(next,OP_HTTP_CTLS);\n    next=cdr_end+op_http_lwsspn(cdr_end);\n  }\n  while(next>cdr_end);\n  /*We are not mandating this be present thanks to broken servers.*/\n  if(OP_LIKELY(*next=='\\r'))next++;\n  if(OP_UNLIKELY(*next++!='\\n'))return OP_FALSE;\n  *header_end='\\0';\n  *cdr_end='\\0';\n  /*Field names are case-insensitive.*/\n  op_string_tolower(header);\n  *_header=header;\n  *_cdr=cdr;\n  *_s=next;\n  return 0;\n}\n\nstatic opus_int64 op_http_parse_nonnegative_int64(const char **_next,\n const char *_cdr){\n  const char *next;\n  opus_int64  ret;\n  int         i;\n  next=_cdr+strspn(_cdr,OP_HTTP_DIGIT);\n  *_next=next;\n  if(OP_UNLIKELY(next<=_cdr))return OP_FALSE;\n  while(*_cdr=='0')_cdr++;\n  if(OP_UNLIKELY(next-_cdr>19))return OP_EIMPL;\n  ret=0;\n  for(i=0;i<next-_cdr;i++){\n    int digit;\n    digit=_cdr[i]-'0';\n    /*Check for overflow.*/\n    if(OP_UNLIKELY(ret>(OP_INT64_MAX-9)/10+(digit<=7)))return OP_EIMPL;\n    ret=ret*10+digit;\n  }\n  return ret;\n}\n\nstatic opus_int64 op_http_parse_content_length(const char *_cdr){\n  const char *next;\n  opus_int64  content_length;\n  content_length=op_http_parse_nonnegative_int64(&next,_cdr);\n  if(OP_UNLIKELY(*next!='\\0'))return OP_FALSE;\n  return content_length;\n}\n\nstatic int op_http_parse_content_range(opus_int64 *_first,opus_int64 *_last,\n opus_int64 *_length,const char *_cdr){\n  opus_int64 first;\n  opus_int64 last;\n  opus_int64 length;\n  size_t     d;\n  if(OP_UNLIKELY(op_strncasecmp(_cdr,\"bytes\",5)!=0))return OP_FALSE;\n  _cdr+=5;\n  d=op_http_lwsspn(_cdr);\n  if(OP_UNLIKELY(d<=0))return OP_FALSE;\n  _cdr+=d;\n  if(*_cdr!='*'){\n    first=op_http_parse_nonnegative_int64(&_cdr,_cdr);\n    if(OP_UNLIKELY(first<0))return (int)first;\n    _cdr+=op_http_lwsspn(_cdr);\n    if(*_cdr++!='-')return OP_FALSE;\n    
_cdr+=op_http_lwsspn(_cdr);\n    last=op_http_parse_nonnegative_int64(&_cdr,_cdr);\n    if(OP_UNLIKELY(last<0))return (int)last;\n    _cdr+=op_http_lwsspn(_cdr);\n  }\n  else{\n    /*This is for a 416 response (Requested range not satisfiable).*/\n    first=last=-1;\n    _cdr++;\n  }\n  if(OP_UNLIKELY(*_cdr++!='/'))return OP_FALSE;\n  if(*_cdr!='*'){\n    length=op_http_parse_nonnegative_int64(&_cdr,_cdr);\n    if(OP_UNLIKELY(length<0))return (int)length;\n  }\n  else{\n    /*The total length is unspecified.*/\n    _cdr++;\n    length=-1;\n  }\n  if(OP_UNLIKELY(*_cdr!='\\0'))return OP_FALSE;\n  if(OP_UNLIKELY(last<first))return OP_FALSE;\n  if(length>=0&&OP_UNLIKELY(last>=length))return OP_FALSE;\n  *_first=first;\n  *_last=last;\n  *_length=length;\n  return 0;\n}\n\n/*Parse the Connection response header and look for a \"close\" token.\n  Return: 1 if a \"close\" token is found, 0 if it's not found, and a negative\n           value on error.*/\nstatic int op_http_parse_connection(char *_cdr){\n  size_t d;\n  int    ret;\n  ret=0;\n  for(;;){\n    d=strcspn(_cdr,OP_HTTP_CTOKEN);\n    if(OP_UNLIKELY(d<=0))return OP_FALSE;\n    if(op_strncasecmp(_cdr,\"close\",(int)d)==0)ret=1;\n    /*We're supposed to strip and ignore any headers mentioned in the\n       Connection header if this response is from an HTTP/1.0 server (to\n       work around forwarding of hop-by-hop headers by old proxies), but the\n       only hop-by-hop header we look at is Connection itself.\n      Everything else is a well-defined end-to-end header, and going back and\n       undoing the things we did based on already-examined headers would be\n       hard (since we only scan them once, in a destructive manner).\n      Therefore we just ignore all the other tokens.*/\n    _cdr+=d;\n    d=op_http_lwsspn(_cdr);\n    if(d<=0)break;\n    _cdr+=d;\n  }\n  return OP_UNLIKELY(*_cdr!='\\0')?OP_FALSE:ret;\n}\n\ntypedef int (*op_ssl_step_func)(SSL *_ssl_conn);\n\n/*Try to run an SSL function to completion 
(blocking if necessary).*/\nstatic int op_do_ssl_step(SSL *_ssl_conn,op_sock _fd,op_ssl_step_func _step){\n  struct pollfd fd;\n  fd.fd=_fd;\n  for(;;){\n    int ret;\n    int err;\n    ret=(*_step)(_ssl_conn);\n    if(ret>=0)return ret;\n    err=SSL_get_error(_ssl_conn,ret);\n    if(err==SSL_ERROR_WANT_READ)fd.events=POLLIN;\n    else if(err==SSL_ERROR_WANT_WRITE)fd.events=POLLOUT;\n    else return OP_FALSE;\n    if(poll(&fd,1,OP_POLL_TIMEOUT_MS)<=0)return OP_FALSE;\n  }\n}\n\n/*Implement a BIO type that just indicates every operation should be retried.\n  We use this when initializing an SSL connection via a proxy to allow the\n   initial handshake to proceed all the way up to the first read attempt, and\n   then return.\n  This allows the TLS client hello message to be pipelined with the HTTP\n   CONNECT request.*/\n\nstatic int op_bio_retry_write(BIO *_b,const char *_buf,int _num){\n  (void)_buf;\n  (void)_num;\n  BIO_clear_retry_flags(_b);\n  BIO_set_retry_write(_b);\n  return -1;\n}\n\nstatic int op_bio_retry_read(BIO *_b,char *_buf,int _num){\n  (void)_buf;\n  (void)_num;\n  BIO_clear_retry_flags(_b);\n  BIO_set_retry_read(_b);\n  return -1;\n}\n\nstatic int op_bio_retry_puts(BIO *_b,const char *_str){\n  return op_bio_retry_write(_b,_str,0);\n}\n\nstatic long op_bio_retry_ctrl(BIO *_b,int _cmd,long _num,void *_ptr){\n  long ret;\n  (void)_b;\n  (void)_num;\n  (void)_ptr;\n  ret=0;\n  switch(_cmd){\n    case BIO_CTRL_RESET:\n    case BIO_C_RESET_READ_REQUEST:{\n      BIO_clear_retry_flags(_b);\n      /*Fall through.*/\n    }\n    case BIO_CTRL_EOF:\n    case BIO_CTRL_SET:\n    case BIO_CTRL_SET_CLOSE:\n    case BIO_CTRL_FLUSH:\n    case BIO_CTRL_DUP:{\n      ret=1;\n    }break;\n  }\n  return ret;\n}\n\n# if OPENSSL_VERSION_NUMBER<0x10100000L\n#  define BIO_set_data(_b,_ptr) ((_b)->ptr=(_ptr))\n#  define BIO_set_init(_b,_init) ((_b)->init=(_init))\n#  define ASN1_STRING_get0_data ASN1_STRING_data\n# endif\n\nstatic int op_bio_retry_new(BIO *_b){\n  
BIO_set_init(_b,1);\n# if OPENSSL_VERSION_NUMBER<0x10100000L\n  _b->num=0;\n# endif\n  BIO_set_data(_b,NULL);\n  return 1;\n}\n\nstatic int op_bio_retry_free(BIO *_b){\n  return _b!=NULL;\n}\n\n# if OPENSSL_VERSION_NUMBER<0x10100000L\n/*This is not const because OpenSSL doesn't allow it, even though it won't\n   write to it.*/\nstatic BIO_METHOD op_bio_retry_method={\n  BIO_TYPE_NULL,\n  \"retry\",\n  op_bio_retry_write,\n  op_bio_retry_read,\n  op_bio_retry_puts,\n  NULL,\n  op_bio_retry_ctrl,\n  op_bio_retry_new,\n  op_bio_retry_free,\n  NULL\n};\n# endif\n\n/*Establish a CONNECT tunnel and pipeline the start of the TLS handshake for\n   proxying https URL requests.*/\nstatic int op_http_conn_establish_tunnel(OpusHTTPStream *_stream,\n OpusHTTPConn *_conn,op_sock _fd,SSL *_ssl_conn,BIO *_ssl_bio){\n# if OPENSSL_VERSION_NUMBER>=0x10100000L\n  BIO_METHOD *bio_retry_method;\n# endif\n  BIO  *retry_bio;\n  char *status_code;\n  char *next;\n  int   ret;\n  _conn->ssl_conn=NULL;\n  _conn->fd=_fd;\n  OP_ASSERT(_stream->proxy_connect.nbuf>0);\n  ret=op_http_conn_write_fully(_conn,\n   _stream->proxy_connect.buf,_stream->proxy_connect.nbuf);\n  if(OP_UNLIKELY(ret<0))return ret;\n# if OPENSSL_VERSION_NUMBER>=0x10100000L\n  bio_retry_method=BIO_meth_new(BIO_TYPE_NULL,\"retry\");\n  if(bio_retry_method==NULL)return OP_EFAULT;\n  BIO_meth_set_write(bio_retry_method,op_bio_retry_write);\n  BIO_meth_set_read(bio_retry_method,op_bio_retry_read);\n  BIO_meth_set_puts(bio_retry_method,op_bio_retry_puts);\n  BIO_meth_set_ctrl(bio_retry_method,op_bio_retry_ctrl);\n  BIO_meth_set_create(bio_retry_method,op_bio_retry_new);\n  BIO_meth_set_destroy(bio_retry_method,op_bio_retry_free);\n  retry_bio=BIO_new(bio_retry_method);\n  if(OP_UNLIKELY(retry_bio==NULL)){\n    BIO_meth_free(bio_retry_method);\n    return OP_EFAULT;\n  }\n# else\n  retry_bio=BIO_new(&op_bio_retry_method);\n  if(OP_UNLIKELY(retry_bio==NULL))return OP_EFAULT;\n# endif\n  SSL_set_bio(_ssl_conn,retry_bio,_ssl_bio);\n  
SSL_set_connect_state(_ssl_conn);\n  /*This shouldn't succeed, since we can't read yet.*/\n  OP_ALWAYS_TRUE(SSL_connect(_ssl_conn)<0);\n  SSL_set_bio(_ssl_conn,_ssl_bio,_ssl_bio);\n# if OPENSSL_VERSION_NUMBER>=0x10100000L\n  BIO_meth_free(bio_retry_method);\n# endif\n  /*Only now do we disable write coalescing, to allow the CONNECT\n     request and the start of the TLS handshake to be combined.*/\n  op_sock_set_tcp_nodelay(_fd,1);\n  ret=op_http_conn_read_response(_conn,&_stream->response);\n  if(OP_UNLIKELY(ret<0))return ret;\n  next=op_http_parse_status_line(NULL,&status_code,_stream->response.buf);\n  /*According to RFC 2817, \"Any successful (2xx) response to a\n     CONNECT request indicates that the proxy has established a\n     connection to the requested host and port.\"*/\n  if(OP_UNLIKELY(next==NULL)||OP_UNLIKELY(status_code[0]!='2'))return OP_FALSE;\n  return 0;\n}\n\n/*Convert a host to a numeric address, if possible.\n  Return: A struct addrinfo containing the address, if it was numeric, and NULL\n           otherwise.*/\nstatic struct addrinfo *op_inet_pton(const char *_host){\n  struct addrinfo *addrs;\n  struct addrinfo  hints;\n  memset(&hints,0,sizeof(hints));\n  hints.ai_socktype=SOCK_STREAM;\n  hints.ai_flags=AI_NUMERICHOST;\n  if(!getaddrinfo(_host,NULL,&hints,&addrs))return addrs;\n  return NULL;\n}\n\n# if OPENSSL_VERSION_NUMBER<0x10002000L\n/*Match a host name against a host with a possible wildcard pattern according\n   to the rules of RFC 6125 Section 6.4.3.\n  Return: 0 if the pattern doesn't match, and a non-zero value if it does.*/\nstatic int op_http_hostname_match(const char *_host,size_t _host_len,\n ASN1_STRING *_pattern){\n  const char *pattern;\n  size_t      host_label_len;\n  size_t      host_suffix_len;\n  size_t      pattern_len;\n  size_t      pattern_label_len;\n  size_t      pattern_prefix_len;\n  size_t      pattern_suffix_len;\n  if(OP_UNLIKELY(_host_len>(size_t)INT_MAX))return 0;\n  pattern=(const char 
*)ASN1_STRING_get0_data(_pattern);\n  pattern_len=strlen(pattern);\n  /*Check the pattern for embedded NULs.*/\n  if(OP_UNLIKELY(pattern_len!=(size_t)ASN1_STRING_length(_pattern)))return 0;\n  pattern_label_len=strcspn(pattern,\".\");\n  OP_ASSERT(pattern_label_len<=pattern_len);\n  pattern_prefix_len=strcspn(pattern,\"*\");\n  if(OP_UNLIKELY(pattern_prefix_len>(size_t)INT_MAX))return 0;\n  if(pattern_prefix_len>=pattern_label_len){\n    /*\"The client SHOULD NOT attempt to match a presented identifier in which\n       the wildcard character comprises a label other than the left-most label\n       (e.g., do not match bar.*.example.net).\" [RFC 6125 Section 6.4.3]*/\n    if(pattern_prefix_len<pattern_len)return 0;\n    /*If the pattern does not contain a wildcard in the first element, do an\n       exact match.\n      Don't use the system strcasecmp here, as that uses the locale and\n       RFC 4343 makes clear that DNS's case-insensitivity only applies to\n       the ASCII range.*/\n    return _host_len==pattern_len\n     &&op_strncasecmp(_host,pattern,(int)_host_len)==0;\n  }\n  /*\"However, the client SHOULD NOT attempt to match a presented identifier\n     where the wildcard character is embedded within an A-label or U-label of\n     an internationalized domain name.\" [RFC 6125 Section 6.4.3]*/\n  if(op_strncasecmp(pattern,\"xn--\",4)==0)return 0;\n  host_label_len=strcspn(_host,\".\");\n  /*Make sure the host has at least two dots, to prevent the wildcard match\n     from being ridiculously wide.\n    We should have already checked to ensure it had at least one.*/\n  if(OP_UNLIKELY(_host[host_label_len]!='.')\n   ||strchr(_host+host_label_len+1,'.')==NULL){\n    return 0;\n  }\n  OP_ASSERT(host_label_len<_host_len);\n  /*\"If the wildcard character is the only character of the left-most label in\n     the presented identifier, the client SHOULD NOT compare against anything\n     but the left-most label of the reference identifier (e.g., *.example.com\n     
would match foo.example.com but not bar.foo.example.com).\" [RFC 6125\n     Section 6.4.3]\n    This is really confusingly worded, as we check this by actually comparing\n     the rest of the pattern for an exact match.\n    We also use the fact that the wildcard must match at least one character,\n     so the left-most label of the hostname must be at least as large as the\n     left-most label of the pattern.*/\n  if(host_label_len<pattern_label_len)return 0;\n  OP_ASSERT(pattern[pattern_prefix_len]=='*');\n  /*\"The client MAY match a presented identifier in which the wildcard\n     character is not the only character of the label (e.g., baz*.example.net\n     and *baz.example.net and b*z.example.net would be taken to match\n     baz1.example.net and foobaz.example.net and buzz.example.net,\n     respectively).\" [RFC 6125 Section 6.4.3]*/\n  pattern_suffix_len=pattern_len-pattern_prefix_len-1;\n  host_suffix_len=_host_len-host_label_len\n   +pattern_label_len-pattern_prefix_len-1;\n  OP_ASSERT(host_suffix_len<=_host_len);\n  return pattern_suffix_len==host_suffix_len\n   &&op_strncasecmp(_host,pattern,(int)pattern_prefix_len)==0\n   &&op_strncasecmp(_host+_host_len-host_suffix_len,\n   pattern+pattern_prefix_len+1,(int)host_suffix_len)==0;\n}\n\n/*Verify the server's hostname matches the certificate they presented using\n   the procedure from Section 6 of RFC 6125.\n  Return: 0 if the certificate doesn't match, and a non-zero value if it does.*/\nstatic int op_http_verify_hostname(OpusHTTPStream *_stream,SSL *_ssl_conn){\n  X509            *peer_cert;\n  struct addrinfo *addr;\n  char            *host;\n  size_t           host_len;\n  unsigned char   *ip;\n  int              ip_len;\n  int              check_cn;\n  int              ret;\n  host=_stream->url.host;\n  host_len=strlen(host);\n  peer_cert=SSL_get_peer_certificate(_ssl_conn);\n  /*We set VERIFY_PEER, so we shouldn't get here without a certificate.*/\n  if(OP_UNLIKELY(peer_cert==NULL))return 0;\n  
ret=0;\n  OP_ASSERT(host_len<INT_MAX);\n  /*By default, fall back to checking the Common Name if we don't check any\n     subjectAltNames of type dNSName.*/\n  check_cn=1;\n  /*Check to see if the host was specified as a simple IP address.*/\n  addr=op_inet_pton(host);\n  ip=NULL;\n  ip_len=0;\n  if(addr!=NULL){\n    switch(addr->ai_family){\n      case AF_INET:{\n        struct sockaddr_in *s;\n        s=(struct sockaddr_in *)addr->ai_addr;\n        OP_ASSERT(addr->ai_addrlen>=sizeof(*s));\n        ip=(unsigned char *)&s->sin_addr;\n        ip_len=sizeof(s->sin_addr);\n        /*RFC 6125 says, \"In this case, the iPAddress subjectAltName must [sic]\n           be present in the certificate and must [sic] exactly match the IP in\n           the URI.\"\n          So don't allow falling back to a Common Name.*/\n        check_cn=0;\n      }break;\n      case AF_INET6:{\n        struct sockaddr_in6 *s;\n        s=(struct sockaddr_in6 *)addr->ai_addr;\n        OP_ASSERT(addr->ai_addrlen>=sizeof(*s));\n        ip=(unsigned char *)&s->sin6_addr;\n        ip_len=sizeof(s->sin6_addr);\n        check_cn=0;\n      }break;\n    }\n  }\n  /*We can only verify IP addresses and \"fully-qualified\" domain names.\n    To quote RFC 6125: \"The extracted data MUST include only information that\n     can be securely parsed out of the inputs (e.g., parsing the fully\n     qualified DNS domain name out of the \"host\" component (or its\n     equivalent) of a URI or deriving the application service type from the\n     scheme of a URI) ...\"\n    We don't have a way to check (without relying on DNS records, which might\n     be subverted) if this address is fully-qualified.\n    This is particularly problematic when using a CONNECT tunnel, as it is\n     the server that does DNS lookup, not us.\n    However, we are certain that if the hostname has no '.', it is definitely\n     not a fully-qualified domain name (with the exception of crazy TLDs that\n     actually resolve, like \"uz\", 
but I am willing to ignore those).\n    RFC 1535 says \"...in any event where a '.' exists in a specified name it\n     should be assumed to be a fully qualified domain name (FQDN) and SHOULD\n     be tried as a rooted name first.\"\n    That doesn't give us any security guarantees, of course (a subverted DNS\n     could fail the original query and our resolver might still retry with a\n     local domain appended).*/\n  if(ip!=NULL||strchr(host,'.')!=NULL){\n    STACK_OF(GENERAL_NAME) *san_names;\n    /*RFC 2818 says (after correcting for Errata 1077): \"If a subjectAltName\n       extension of type dNSName is present, that MUST be used as the identity.\n      Otherwise, the (most specific) Common Name field in the Subject field of\n       the certificate MUST be used.\n      Although the use of the Common Name is existing practice, it is\n       deprecated and Certification Authorities are encouraged to use the\n       dNSName instead.\"\n      \"Matching is performed using the matching rules specified by RFC 2459.\n      If more than one identity of a given type is present in the certificate\n       (e.g., more than one dNSName name), a match in any one of the set is\n       considered acceptable.\n      Names may contain the wildcard character * which is considered to match\n       any single domain name component or component fragment.\n      E.g., *.a.com matches foo.a.com but not bar.foo.a.com.\n      f*.com matches foo.com but not bar.com.\"\n      \"In some cases, the URI is specified as an IP address rather than a\n       hostname.\n      In this case, the iPAddress subjectAltName must be present in the\n       certificate and must exactly match the IP in the URI.\"*/\n    san_names=X509_get_ext_d2i(peer_cert,NID_subject_alt_name,NULL,NULL);\n    if(san_names!=NULL){\n      int nsan_names;\n      int sni;\n      /*RFC 2459 says there MUST be at least one, but we don't depend on it.*/\n      nsan_names=sk_GENERAL_NAME_num(san_names);\n      
for(sni=0;sni<nsan_names;sni++){\n        const GENERAL_NAME *name;\n        name=sk_GENERAL_NAME_value(san_names,sni);\n        if(ip==NULL){\n          if(name->type==GEN_DNS){\n            /*We have a subjectAltName extension of type dNSName, so don't fall\n               back to a Common Name.\n              https://marc.info/?l=openssl-dev&m=139617145216047&w=2 says that\n               subjectAltNames of other types do not trigger this restriction,\n               (e.g., if they are all IP addresses, we will still check a\n               non-IP hostname against a Common Name).*/\n            check_cn=0;\n            if(op_http_hostname_match(host,host_len,name->d.dNSName)){\n              ret=1;\n              break;\n            }\n          }\n        }\n        else if(name->type==GEN_IPADD){\n          unsigned const char *cert_ip;\n          /*If we do have an IP address, compare it directly.\n            RFC 6125: \"When the reference identity is an IP address, the\n             identity MUST be converted to the 'network byte order' octet\n             string representation.\n            For IP Version 4, as specified in RFC 791, the octet string will\n             contain exactly four octets.\n            For IP Version 6, as specified in RFC 2460, the octet string will\n             contain exactly sixteen octets.\n            This octet string is then compared against subjectAltName values of\n             type iPAddress.\n            A match occurs if the reference identity octet string and the value\n             octet strings are identical.\"*/\n          cert_ip=ASN1_STRING_get0_data(name->d.iPAddress);\n          if(ip_len==ASN1_STRING_length(name->d.iPAddress)\n           &&memcmp(ip,cert_ip,ip_len)==0){\n            ret=1;\n            break;\n          }\n        }\n      }\n      sk_GENERAL_NAME_pop_free(san_names,GENERAL_NAME_free);\n    }\n    /*If we're supposed to fall back to a Common Name, match against it here.*/\n    if(check_cn){\n  
    int last_cn_loc;\n      int cn_loc;\n      /*RFC 6125 says that at least one significant CA is known to issue certs\n         with multiple CNs, although it SHOULD NOT.\n        It also says: \"The server's identity may also be verified by comparing\n         the reference identity to the Common Name (CN) value in the last\n         Relative Distinguished Name (RDN) of the subject field of the server's\n         certificate (where \"last\" refers to the DER-encoded order...).\"\n        So find the last one and check it.*/\n      cn_loc=-1;\n      do{\n        last_cn_loc=cn_loc;\n        cn_loc=X509_NAME_get_index_by_NID(X509_get_subject_name(peer_cert),\n         NID_commonName,last_cn_loc);\n      }\n      while(cn_loc>=0);\n      ret=last_cn_loc>=0\n       &&op_http_hostname_match(host,host_len,\n       X509_NAME_ENTRY_get_data(\n       X509_NAME_get_entry(X509_get_subject_name(peer_cert),last_cn_loc)));\n    }\n  }\n  if(addr!=NULL)freeaddrinfo(addr);\n  X509_free(peer_cert);\n  return ret;\n}\n# endif\n\n/*Perform the TLS handshake on a new connection.*/\nstatic int op_http_conn_start_tls(OpusHTTPStream *_stream,OpusHTTPConn *_conn,\n op_sock _fd,SSL *_ssl_conn){\n  SSL_SESSION *ssl_session;\n  BIO         *ssl_bio;\n  int          skip_certificate_check;\n  int          ret;\n  /*This always takes an int, even though with Winsock op_sock is a SOCKET.*/\n  ssl_bio=BIO_new_socket((int)_fd,BIO_NOCLOSE);\n  if(OP_UNLIKELY(ssl_bio==NULL))return OP_FALSE;\n# if !defined(OPENSSL_NO_TLSEXT)\n  /*Support for RFC 6066 Server Name Indication.*/\n  SSL_set_tlsext_host_name(_ssl_conn,_stream->url.host);\n# endif\n  skip_certificate_check=_stream->skip_certificate_check;\n# if OPENSSL_VERSION_NUMBER>=0x10002000L\n  /*As of version 1.0.2, OpenSSL can finally do hostname checks automatically.\n    Of course, they make it much more complicated than it needs to be.*/\n  if(!skip_certificate_check){\n    X509_VERIFY_PARAM *param;\n    struct addrinfo   *addr;\n    char       
       *host;\n    unsigned char     *ip;\n    int                ip_len;\n    param=SSL_get0_param(_ssl_conn);\n    OP_ASSERT(param!=NULL);\n    host=_stream->url.host;\n    ip=NULL;\n    ip_len=0;\n    /*Check to see if the host was specified as a simple IP address.*/\n    addr=op_inet_pton(host);\n    if(addr!=NULL){\n      switch(addr->ai_family){\n        case AF_INET:{\n          struct sockaddr_in *s;\n          s=(struct sockaddr_in *)addr->ai_addr;\n          OP_ASSERT(addr->ai_addrlen>=sizeof(*s));\n          ip=(unsigned char *)&s->sin_addr;\n          ip_len=sizeof(s->sin_addr);\n          host=NULL;\n        }break;\n        case AF_INET6:{\n          struct sockaddr_in6 *s;\n          s=(struct sockaddr_in6 *)addr->ai_addr;\n          OP_ASSERT(addr->ai_addrlen>=sizeof(*s));\n          ip=(unsigned char *)&s->sin6_addr;\n          ip_len=sizeof(s->sin6_addr);\n          host=NULL;\n        }break;\n      }\n    }\n    /*Always set both host and ip to prevent matching against an old one.\n      One of the two will always be NULL, clearing that parameter.*/\n    X509_VERIFY_PARAM_set1_host(param,host,0);\n    X509_VERIFY_PARAM_set1_ip(param,ip,ip_len);\n    if(addr!=NULL)freeaddrinfo(addr);\n  }\n# endif\n  /*Resume a previous session if available.*/\n  if(_stream->ssl_session!=NULL){\n    SSL_set_session(_ssl_conn,_stream->ssl_session);\n  }\n  /*If we're proxying, establish the CONNECT tunnel.*/\n  if(_stream->proxy_connect.nbuf>0){\n    ret=op_http_conn_establish_tunnel(_stream,_conn,\n     _fd,_ssl_conn,ssl_bio);\n    if(OP_UNLIKELY(ret<0))return ret;\n  }\n  else{\n    /*Otherwise, just use this socket directly.*/\n    op_sock_set_tcp_nodelay(_fd,1);\n    SSL_set_bio(_ssl_conn,ssl_bio,ssl_bio);\n    SSL_set_connect_state(_ssl_conn);\n  }\n  ret=op_do_ssl_step(_ssl_conn,_fd,SSL_connect);\n  if(OP_UNLIKELY(ret<=0))return OP_FALSE;\n  ssl_session=_stream->ssl_session;\n  if(ssl_session==NULL\n# if OPENSSL_VERSION_NUMBER<0x10002000L\n   
||!skip_certificate_check\n# endif\n   ){\n    ret=op_do_ssl_step(_ssl_conn,_fd,SSL_do_handshake);\n    if(OP_UNLIKELY(ret<=0))return OP_FALSE;\n# if OPENSSL_VERSION_NUMBER<0x10002000L\n    /*OpenSSL before version 1.0.2 does not do automatic hostname verification,\n       despite the fact that we just passed it the hostname above in the call\n       to SSL_set_tlsext_host_name().\n      Do it for them.*/\n    if(!skip_certificate_check&&!op_http_verify_hostname(_stream,_ssl_conn)){\n      return OP_FALSE;\n    }\n# endif\n    if(ssl_session==NULL){\n      /*Save the session for later resumption.*/\n      _stream->ssl_session=SSL_get1_session(_ssl_conn);\n    }\n  }\n  _conn->ssl_conn=_ssl_conn;\n  _conn->fd=_fd;\n  _conn->nrequests_left=OP_PIPELINE_MAX_REQUESTS;\n  return 0;\n}\n\n/*Try to start a connection to the next address in the given list of a given\n   type.\n  _fd:           The socket to connect with.\n  [inout] _addr: A pointer to the list of addresses.\n                 This will be advanced to the first one that matches the given\n                  address family (possibly the current one).\n  _ai_family:    The address family to connect to.\n  Return: 1        If the connection was successful.\n          0        If the connection is in progress.\n          OP_FALSE If the connection failed and there were no more addresses\n                    left to try.\n                    *_addr will be set to NULL in this case.*/\nstatic int op_sock_connect_next(op_sock _fd,\n struct addrinfo **_addr,int _ai_family){\n  struct addrinfo *addr;\n  int              err;\n  for(addr=*_addr;;addr=addr->ai_next){\n    /*Move to the next address of the requested type.*/\n    for(;addr!=NULL&&addr->ai_family!=_ai_family;addr=addr->ai_next);\n    *_addr=addr;\n    /*No more: failure.*/\n    if(addr==NULL)return OP_FALSE;\n    if(connect(_fd,addr->ai_addr,addr->ai_addrlen)>=0)return 1;\n    err=op_errno();\n    /*Winsock will set WSAEWOULDBLOCK.*/\n    
if(OP_LIKELY(err==EINPROGRESS||err==EWOULDBLOCK))return 0;\n  }\n}\n\n/*The number of address families to try connecting to simultaneously.*/\n# define OP_NPROTOS (2)\n\nstatic int op_http_connect_impl(OpusHTTPStream *_stream,OpusHTTPConn *_conn,\n struct addrinfo *_addrs,struct timeb *_start_time){\n  struct addrinfo *addr;\n  struct addrinfo *addrs[OP_NPROTOS];\n  struct pollfd    fds[OP_NPROTOS];\n  int              ai_family;\n  int              nprotos;\n  int              ret;\n  int              pi;\n  int              pj;\n  for(pi=0;pi<OP_NPROTOS;pi++)addrs[pi]=NULL;\n  /*Try connecting via both IPv4 and IPv6 simultaneously, and keep the first\n     one that succeeds.\n    Start by finding the first address from each family.\n    We order the first connection attempts in the same order the address\n     families were returned in the DNS records in accordance with RFC 6555.*/\n  for(addr=_addrs,nprotos=0;addr!=NULL&&nprotos<OP_NPROTOS;addr=addr->ai_next){\n    if(addr->ai_family==AF_INET6||addr->ai_family==AF_INET){\n      OP_ASSERT(addr->ai_addrlen<=\n       OP_MAX(sizeof(struct sockaddr_in6),sizeof(struct sockaddr_in)));\n      /*If we've seen this address family before, skip this address for now.*/\n      for(pi=0;pi<nprotos;pi++)if(addrs[pi]->ai_family==addr->ai_family)break;\n      if(pi<nprotos)continue;\n      addrs[nprotos++]=addr;\n    }\n  }\n  /*Pop the connection off the free list and put it on the LRU list.*/\n  OP_ASSERT(_stream->free_head==_conn);\n  _stream->free_head=_conn->next;\n  _conn->next=_stream->lru_head;\n  _stream->lru_head=_conn;\n  ftime(_start_time);\n  *&_conn->read_time=*_start_time;\n  _conn->read_bytes=0;\n  _conn->read_rate=0;\n  /*Try to start a connection to each protocol.\n    RFC 6555 says it is RECOMMENDED that connection attempts be paced\n     150...250 ms apart \"to balance human factors against network load\", but\n     that \"stateful algorithms\" (that's us) \"are expected to be more\n     aggressive\".\n    We 
are definitely more aggressive: we don't pace at all.*/\n  for(pi=0;pi<nprotos;pi++){\n    ai_family=addrs[pi]->ai_family;\n    fds[pi].fd=socket(ai_family,SOCK_STREAM,addrs[pi]->ai_protocol);\n    fds[pi].events=POLLOUT;\n    if(OP_LIKELY(fds[pi].fd!=OP_INVALID_SOCKET)){\n      if(OP_LIKELY(op_sock_set_nonblocking(fds[pi].fd,1)>=0)){\n        ret=op_sock_connect_next(fds[pi].fd,addrs+pi,ai_family);\n        if(OP_UNLIKELY(ret>0)){\n          /*It succeeded right away (technically possible), so stop.*/\n          nprotos=pi+1;\n          break;\n        }\n        /*Otherwise go on to the next protocol, and skip the clean-up below.*/\n        else if(ret==0)continue;\n        /*Tried all the addresses for this protocol.*/\n      }\n      /*Clean up the socket.*/\n      close(fds[pi].fd);\n    }\n    /*Remove this protocol from the list.*/\n    memmove(addrs+pi,addrs+pi+1,sizeof(*addrs)*(nprotos-pi-1));\n    nprotos--;\n    pi--;\n  }\n  /*Wait for one of the connections to finish.*/\n  while(pi>=nprotos&&nprotos>0&&poll(fds,nprotos,OP_POLL_TIMEOUT_MS)>0){\n    for(pi=0;pi<nprotos;pi++){\n      socklen_t errlen;\n      int       err;\n      /*Still waiting...*/\n      if(!fds[pi].revents)continue;\n      errlen=sizeof(err);\n      /*Some platforms will return the pending error in &err and return 0.\n        Others will put it in errno and return -1.*/\n      ret=getsockopt(fds[pi].fd,SOL_SOCKET,SO_ERROR,&err,&errlen);\n      if(ret<0)err=op_errno();\n      /*Success!*/\n      if(err==0||err==EISCONN)break;\n      /*Move on to the next address for this protocol.*/\n      ai_family=addrs[pi]->ai_family;\n      addrs[pi]=addrs[pi]->ai_next;\n      ret=op_sock_connect_next(fds[pi].fd,addrs+pi,ai_family);\n      /*It succeeded right away, so stop.*/\n      if(ret>0)break;\n      /*Otherwise go on to the next protocol, and skip the clean-up below.*/\n      else if(ret==0)continue;\n      /*Tried all the addresses for this protocol.\n        Remove it from the list.*/\n    
  close(fds[pi].fd);\n      memmove(fds+pi,fds+pi+1,sizeof(*fds)*(nprotos-pi-1));\n      memmove(addrs+pi,addrs+pi+1,sizeof(*addrs)*(nprotos-pi-1));\n      nprotos--;\n      pi--;\n    }\n  }\n  /*Close all the other sockets.*/\n  for(pj=0;pj<nprotos;pj++)if(pi!=pj)close(fds[pj].fd);\n  /*If none of them succeeded, we're done.*/\n  if(pi>=nprotos)return OP_FALSE;\n  /*Save this address for future connection attempts.*/\n  if(addrs[pi]!=&_stream->addr_info){\n    memcpy(&_stream->addr_info,addrs[pi],sizeof(_stream->addr_info));\n    _stream->addr_info.ai_addr=&_stream->addr.s;\n    _stream->addr_info.ai_next=NULL;\n    memcpy(&_stream->addr,addrs[pi]->ai_addr,addrs[pi]->ai_addrlen);\n  }\n  if(OP_URL_IS_SSL(&_stream->url)){\n    SSL *ssl_conn;\n    /*Start the SSL connection.*/\n    OP_ASSERT(_stream->ssl_ctx!=NULL);\n    ssl_conn=SSL_new(_stream->ssl_ctx);\n    if(OP_LIKELY(ssl_conn!=NULL)){\n      ret=op_http_conn_start_tls(_stream,_conn,fds[pi].fd,ssl_conn);\n      if(OP_LIKELY(ret>=0))return ret;\n      SSL_free(ssl_conn);\n    }\n    close(fds[pi].fd);\n    _conn->fd=OP_INVALID_SOCKET;\n    return OP_FALSE;\n  }\n  /*Just a normal non-SSL connection.*/\n  _conn->ssl_conn=NULL;\n  _conn->fd=fds[pi].fd;\n  _conn->nrequests_left=OP_PIPELINE_MAX_REQUESTS;\n  /*Disable write coalescing.\n    We always send whole requests at once and always parse the response headers\n     before sending another one.*/\n  op_sock_set_tcp_nodelay(fds[pi].fd,1);\n  return 0;\n}\n\nstatic int op_http_connect(OpusHTTPStream *_stream,OpusHTTPConn *_conn,\n struct addrinfo *_addrs,struct timeb *_start_time){\n  struct timeb     resolve_time;\n  struct addrinfo *new_addrs;\n  int              ret;\n  /*Re-resolve the host if we need to (RFC 6555 says we MUST do so\n     occasionally).*/\n  new_addrs=NULL;\n  ftime(&resolve_time);\n  if(_addrs!=&_stream->addr_info||op_time_diff_ms(&resolve_time,\n   &_stream->resolve_time)>=OP_RESOLVE_CACHE_TIMEOUT_MS){\n    
new_addrs=op_resolve(_stream->connect_host,_stream->connect_port);\n    if(OP_LIKELY(new_addrs!=NULL)){\n      _addrs=new_addrs;\n      *&_stream->resolve_time=*&resolve_time;\n    }\n    else if(OP_LIKELY(_addrs==NULL))return OP_FALSE;\n  }\n  ret=op_http_connect_impl(_stream,_conn,_addrs,_start_time);\n  if(new_addrs!=NULL)freeaddrinfo(new_addrs);\n  return ret;\n}\n\n# define OP_BASE64_LENGTH(_len) (((_len)+2)/3*4)\n\nstatic const char BASE64_TABLE[64]={\n  'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P',\n  'Q','R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f',\n  'g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v',\n  'w','x','y','z','0','1','2','3','4','5','6','7','8','9','+','/'\n};\n\nstatic char *op_base64_encode(char *_dst,const char *_src,int _len){\n  unsigned s0;\n  unsigned s1;\n  unsigned s2;\n  int      ngroups;\n  int      i;\n  ngroups=_len/3;\n  for(i=0;i<ngroups;i++){\n    s0=_src[3*i+0];\n    s1=_src[3*i+1];\n    s2=_src[3*i+2];\n    _dst[4*i+0]=BASE64_TABLE[s0>>2];\n    _dst[4*i+1]=BASE64_TABLE[(s0&3)<<4|s1>>4];\n    _dst[4*i+2]=BASE64_TABLE[(s1&15)<<2|s2>>6];\n    _dst[4*i+3]=BASE64_TABLE[s2&63];\n  }\n  _len-=3*i;\n  if(_len==1){\n    s0=_src[3*i+0];\n    _dst[4*i+0]=BASE64_TABLE[s0>>2];\n    _dst[4*i+1]=BASE64_TABLE[(s0&3)<<4];\n    _dst[4*i+2]='=';\n    _dst[4*i+3]='=';\n    i++;\n  }\n  else if(_len==2){\n    s0=_src[3*i+0];\n    s1=_src[3*i+1];\n    _dst[4*i+0]=BASE64_TABLE[s0>>2];\n    _dst[4*i+1]=BASE64_TABLE[(s0&3)<<4|s1>>4];\n    _dst[4*i+2]=BASE64_TABLE[(s1&15)<<2];\n    _dst[4*i+3]='=';\n    i++;\n  }\n  _dst[4*i]='\\0';\n  return _dst+4*i;\n}\n\n/*Construct an HTTP authorization header using RFC 2617's Basic Authentication\n   Scheme and append it to the given string buffer.*/\nstatic int op_sb_append_basic_auth_header(OpusStringBuf *_sb,\n const char *_header,const char *_user,const char *_pass){\n  size_t user_len;\n  size_t pass_len;\n  int    user_pass_len;\n  int    base64_len;\n  int    
nbuf_total;\n  int    ret;\n  ret=op_sb_append_string(_sb,_header);\n  ret|=op_sb_append(_sb,\": Basic \",8);\n  user_len=strlen(_user);\n  pass_len=strlen(_pass);\n  if(OP_UNLIKELY(user_len>(size_t)INT_MAX))return OP_EFAULT;\n  if(OP_UNLIKELY(pass_len>INT_MAX-user_len))return OP_EFAULT;\n  if(OP_UNLIKELY((int)(user_len+pass_len)>(INT_MAX>>2)*3-3))return OP_EFAULT;\n  user_pass_len=(int)(user_len+pass_len)+1;\n  base64_len=OP_BASE64_LENGTH(user_pass_len);\n  /*Stick \"user:pass\" at the end of the buffer so we can Base64 encode it\n     in-place.*/\n  nbuf_total=_sb->nbuf;\n  if(OP_UNLIKELY(base64_len>INT_MAX-nbuf_total))return OP_EFAULT;\n  nbuf_total+=base64_len;\n  ret|=op_sb_ensure_capacity(_sb,nbuf_total);\n  if(OP_UNLIKELY(ret<0))return ret;\n  _sb->nbuf=nbuf_total-user_pass_len;\n  OP_ALWAYS_TRUE(!op_sb_append(_sb,_user,(int)user_len));\n  OP_ALWAYS_TRUE(!op_sb_append(_sb,\":\",1));\n  OP_ALWAYS_TRUE(!op_sb_append(_sb,_pass,(int)pass_len));\n  op_base64_encode(_sb->buf+nbuf_total-base64_len,\n   _sb->buf+nbuf_total-user_pass_len,user_pass_len);\n  return op_sb_append(_sb,\"\\r\\n\",2);\n}\n\nstatic int op_http_allow_pipelining(const char *_server){\n  /*Servers known to do bad things with pipelined requests.\n    This list is taken from Gecko's nsHttpConnection::SupportsPipelining() (in\n     netwerk/protocol/http/nsHttpConnection.cpp).*/\n  static const char *BAD_SERVERS[]={\n    \"EFAServer/\",\n    \"Microsoft-IIS/4.\",\n    \"Microsoft-IIS/5.\",\n    \"Netscape-Enterprise/3.\",\n    \"Netscape-Enterprise/4.\",\n    \"Netscape-Enterprise/5.\",\n    \"Netscape-Enterprise/6.\",\n    \"WebLogic 3.\",\n    \"WebLogic 4.\",\n    \"WebLogic 5.\",\n    \"WebLogic 6.\",\n    \"Winstone Servlet Engine v0.\"\n  };\n# define NBAD_SERVERS ((int)(sizeof(BAD_SERVERS)/sizeof(*BAD_SERVERS)))\n  if(*_server>='E'&&*_server<='W'){\n    int si;\n    for(si=0;si<NBAD_SERVERS;si++){\n      if(strncmp(_server,BAD_SERVERS[si],strlen(BAD_SERVERS[si]))==0){\n        return 0;\n    
  }\n    }\n  }\n  return 1;\n# undef NBAD_SERVERS\n}\n\nstatic int op_http_stream_open(OpusHTTPStream *_stream,const char *_url,\n int _skip_certificate_check,const char *_proxy_host,unsigned _proxy_port,\n const char *_proxy_user,const char *_proxy_pass,OpusServerInfo *_info){\n  struct addrinfo *addrs;\n  int              nredirs;\n  int              ret;\n#if defined(_WIN32)\n  op_init_winsock();\n#endif\n  ret=op_parse_url(&_stream->url,_url);\n  if(OP_UNLIKELY(ret<0))return ret;\n  if(_proxy_host!=NULL){\n    if(OP_UNLIKELY(_proxy_port>65535U))return OP_EINVAL;\n    _stream->connect_host=op_string_dup(_proxy_host);\n    _stream->connect_port=_proxy_port;\n  }\n  else{\n    _stream->connect_host=_stream->url.host;\n    _stream->connect_port=_stream->url.port;\n  }\n  addrs=NULL;\n  for(nredirs=0;nredirs<OP_REDIRECT_LIMIT;nredirs++){\n    OpusParsedURL  next_url;\n    struct timeb   start_time;\n    struct timeb   end_time;\n    char          *next;\n    char          *status_code;\n    int            minor_version_pos;\n    int            v1_1_compat;\n    /*Initialize the SSL library if necessary.*/\n    if(OP_URL_IS_SSL(&_stream->url)&&_stream->ssl_ctx==NULL){\n      SSL_CTX *ssl_ctx;\n# if OPENSSL_VERSION_NUMBER<0x10100000L\n#  if !defined(OPENSSL_NO_LOCKING)\n      /*The documentation says SSL_library_init() is not reentrant.\n        We don't want to add our own dependencies on a threading library, and it\n         appears that it's safe to call OpenSSL's locking functions before the\n         library is initialized, so that's what we'll do (really OpenSSL should\n         do this for us).\n        This doesn't guarantee that _other_ threads in the application aren't\n         calling SSL_library_init() at the same time, but there's not much we\n         can do about that.*/\n      CRYPTO_w_lock(CRYPTO_LOCK_SSL);\n#  endif\n      SSL_library_init();\n      /*Needed to get SHA2 algorithms with old OpenSSL versions.*/\n      OpenSSL_add_ssl_algorithms();\n#  
if !defined(OPENSSL_NO_LOCKING)\n      CRYPTO_w_unlock(CRYPTO_LOCK_SSL);\n#  endif\n# else\n      /*Finally, OpenSSL does this for us, but as penance, it can now fail.*/\n      if(!OPENSSL_init_ssl(0,NULL))return OP_EFAULT;\n# endif\n      ssl_ctx=SSL_CTX_new(SSLv23_client_method());\n      if(ssl_ctx==NULL)return OP_EFAULT;\n      if(!_skip_certificate_check){\n        /*We don't do anything if this fails, since it just means we won't load\n           any certificates (and thus all checks will fail).\n          However, as that is probably the result of a system\n           mis-configuration, assert here to make it easier to identify.*/\n        OP_ALWAYS_TRUE(SSL_CTX_set_default_verify_paths(ssl_ctx));\n        SSL_CTX_set_verify(ssl_ctx,SSL_VERIFY_PEER,NULL);\n      }\n      _stream->ssl_ctx=ssl_ctx;\n      _stream->skip_certificate_check=_skip_certificate_check;\n      if(_proxy_host!=NULL){\n        /*We need to establish a CONNECT tunnel to handle https proxying.\n          Build the request we'll send to do so.*/\n        _stream->proxy_connect.nbuf=0;\n        ret=op_sb_append(&_stream->proxy_connect,\"CONNECT \",8);\n        ret|=op_sb_append_string(&_stream->proxy_connect,_stream->url.host);\n        ret|=op_sb_append_port(&_stream->proxy_connect,_stream->url.port);\n        /*CONNECT requires at least HTTP 1.1.*/\n        ret|=op_sb_append(&_stream->proxy_connect,\" HTTP/1.1\\r\\n\",11);\n        ret|=op_sb_append(&_stream->proxy_connect,\"Host: \",6);\n        ret|=op_sb_append_string(&_stream->proxy_connect,_stream->url.host);\n        /*The example in RFC 2817 Section 5.2 specifies an explicit port even\n           when connecting to the default port.\n          Given that the proxy doesn't know whether we're trying to connect to\n           an http or an https URL except by the port number, this seems like a\n           good idea.*/\n        ret|=op_sb_append_port(&_stream->proxy_connect,_stream->url.port);\n        
ret|=op_sb_append(&_stream->proxy_connect,\"\\r\\n\",2);\n        ret|=op_sb_append(&_stream->proxy_connect,\"User-Agent: .\\r\\n\",15);\n        if(_proxy_user!=NULL&&_proxy_pass!=NULL){\n          ret|=op_sb_append_basic_auth_header(&_stream->proxy_connect,\n           \"Proxy-Authorization\",_proxy_user,_proxy_pass);\n        }\n        /*For backwards compatibility.*/\n        ret|=op_sb_append(&_stream->proxy_connect,\n         \"Proxy-Connection: keep-alive\\r\\n\",30);\n        ret|=op_sb_append(&_stream->proxy_connect,\"\\r\\n\",2);\n        if(OP_UNLIKELY(ret<0))return ret;\n      }\n    }\n    /*Actually make the connection.*/\n    ret=op_http_connect(_stream,_stream->conns+0,addrs,&start_time);\n    if(OP_UNLIKELY(ret<0))return ret;\n    /*Build the request to send.*/\n    _stream->request.nbuf=0;\n    ret=op_sb_append(&_stream->request,\"GET \",4);\n    ret|=op_sb_append_string(&_stream->request,\n     _proxy_host!=NULL?_url:_stream->url.path);\n    /*Send HTTP/1.0 by default for maximum compatibility (so we don't have to\n       re-try if HTTP/1.1 fails, though it shouldn't, even for a 1.0 server).\n      This means we aren't conditionally compliant with RFC 2145, because we\n       violate the requirement that \"An HTTP client SHOULD send a request\n       version equal to the highest version for which the client is at least\n       conditionally compliant...\".\n      According to RFC 2145, that means we can't claim any compliance with any\n       IETF HTTP specification.*/\n    ret|=op_sb_append(&_stream->request,\" HTTP/1.0\\r\\n\",11);\n    /*Remember where this is so we can upgrade to HTTP/1.1 if the server\n       supports it.*/\n    minor_version_pos=_stream->request.nbuf-3;\n    ret|=op_sb_append(&_stream->request,\"Host: \",6);\n    ret|=op_sb_append_string(&_stream->request,_stream->url.host);\n    if(!OP_URL_IS_DEFAULT_PORT(&_stream->url)){\n      ret|=op_sb_append_port(&_stream->request,_stream->url.port);\n    }\n    
ret|=op_sb_append(&_stream->request,\"\\r\\n\",2);\n    /*User-Agents have been a bad idea, so send as little as possible.\n      RFC 2616 requires at least one token in the User-Agent, which must have\n       at least one character.*/\n    ret|=op_sb_append(&_stream->request,\"User-Agent: .\\r\\n\",15);\n    if(_proxy_host!=NULL&&!OP_URL_IS_SSL(&_stream->url)\n     &&_proxy_user!=NULL&&_proxy_pass!=NULL){\n      ret|=op_sb_append_basic_auth_header(&_stream->request,\n       \"Proxy-Authorization\",_proxy_user,_proxy_pass);\n    }\n    if(_stream->url.user!=NULL&&_stream->url.pass!=NULL){\n      ret|=op_sb_append_basic_auth_header(&_stream->request,\n       \"Authorization\",_stream->url.user,_stream->url.pass);\n    }\n    /*Always send a Referer [sic] header.\n      It's common to refuse to serve a resource unless one is present.\n      We just use the relative \"/\" URI to suggest we came from the same domain,\n       as this is the most common check.\n      This might violate RFC 2616's mandate that the field \"MUST NOT be sent if\n       the Request-URI was obtained from a source that does not have its own\n       URI, such as input from the user keyboard,\" but we don't really have any\n       way to know.*/\n    /*TODO: Should we update this on redirects?*/\n    ret|=op_sb_append(&_stream->request,\"Referer: /\\r\\n\",12);\n    /*Always send a Range request header to find out if we're seekable.\n      This requires an HTTP/1.1 server to succeed, but we'll still get what we\n       want with an HTTP/1.0 server that ignores this request header.*/\n    ret|=op_sb_append(&_stream->request,\"Range: bytes=0-\\r\\n\",17);\n    /*Remember where this is so we can append offsets to it later.*/\n    _stream->request_tail=_stream->request.nbuf-4;\n    ret|=op_sb_append(&_stream->request,\"\\r\\n\",2);\n    if(OP_UNLIKELY(ret<0))return ret;\n    ret=op_http_conn_write_fully(_stream->conns+0,\n     _stream->request.buf,_stream->request.nbuf);\n    
    if(OP_UNLIKELY(ret<0))return ret;\n    ret=op_http_conn_read_response(_stream->conns+0,&_stream->response);\n    if(OP_UNLIKELY(ret<0))return ret;\n    ftime(&end_time);\n    next=op_http_parse_status_line(&v1_1_compat,&status_code,\n     _stream->response.buf);\n    if(OP_UNLIKELY(next==NULL))return OP_FALSE;\n    if(status_code[0]=='2'){\n      opus_int64 content_length;\n      opus_int64 range_length;\n      int        pipeline_supported;\n      int        pipeline_disabled;\n      /*We only understand 20x codes.*/\n      if(status_code[1]!='0')return OP_FALSE;\n      content_length=-1;\n      range_length=-1;\n      /*Pipelining must be explicitly enabled.*/\n      pipeline_supported=0;\n      pipeline_disabled=0;\n      for(;;){\n        char *header;\n        char *cdr;\n        ret=op_http_get_next_header(&header,&cdr,&next);\n        if(OP_UNLIKELY(ret<0))return ret;\n        if(header==NULL)break;\n        if(strcmp(header,\"content-length\")==0){\n          /*Two Content-Length headers?*/\n          if(OP_UNLIKELY(content_length>=0))return OP_FALSE;\n          content_length=op_http_parse_content_length(cdr);\n          if(OP_UNLIKELY(content_length<0))return (int)content_length;\n          /*Make sure the Content-Length and Content-Range headers match.*/\n          if(range_length>=0&&OP_UNLIKELY(content_length!=range_length)){\n            return OP_FALSE;\n          }\n        }\n        else if(strcmp(header,\"content-range\")==0){\n          opus_int64 range_first;\n          opus_int64 range_last;\n          /*Two Content-Range headers?*/\n          if(OP_UNLIKELY(range_length>=0))return OP_FALSE;\n          ret=op_http_parse_content_range(&range_first,&range_last,\n           &range_length,cdr);\n          if(OP_UNLIKELY(ret<0))return ret;\n          /*\"A response with status code 206 (Partial Content) MUST NOT\n             include a Content-Range field with a byte-range-resp-spec of\n             '*'.\"*/\n          if(status_code[2]=='6'\n       
    &&(OP_UNLIKELY(range_first<0)||OP_UNLIKELY(range_last<0))){\n            return OP_FALSE;\n          }\n          /*We asked for the entire resource.*/\n          if(range_length>=0){\n            /*Quit if we didn't get it.*/\n            if(range_last>=0&&OP_UNLIKELY(range_last!=range_length-1)){\n              return OP_FALSE;\n            }\n          }\n          /*If there was no length, use the end of the range.*/\n          else if(range_last>=0)range_length=range_last+1;\n          /*Make sure the Content-Length and Content-Range headers match.*/\n          if(content_length>=0&&OP_UNLIKELY(content_length!=range_length)){\n            return OP_FALSE;\n          }\n        }\n        else if(strcmp(header,\"connection\")==0){\n          /*According to RFC 2616, if an HTTP/1.1 application does not support\n             pipelining, it \"MUST include the 'close' connection option in\n             every message.\"\n            Therefore, if we receive one in the initial response, disable\n             pipelining entirely.\n            The server still might support it (e.g., we might just have hit the\n             request limit for a temporary child process), but if it doesn't\n             and we assume it does, every time we cross a chunk boundary we'll\n             error out and reconnect, adding lots of latency.*/\n          ret=op_http_parse_connection(cdr);\n          if(OP_UNLIKELY(ret<0))return ret;\n          pipeline_disabled|=ret;\n        }\n        else if(strcmp(header,\"server\")==0){\n          /*If we got a Server response header, and it wasn't from a known-bad\n             server, enable pipelining, as long as it's at least HTTP/1.1.\n            According to RFC 2145, the server is supposed to respond with the\n             highest minor version number it supports unless it is known or\n             suspected that we incorrectly implement the HTTP specification.\n            So it should send back at least HTTP/1.1, despite our 
HTTP/1.0\n             request.*/\n          pipeline_supported=v1_1_compat;\n          if(v1_1_compat)pipeline_disabled|=!op_http_allow_pipelining(cdr);\n          if(_info!=NULL&&_info->server==NULL)_info->server=op_string_dup(cdr);\n        }\n        /*Collect station information headers if the caller requested it.\n          If there's more than one copy of a header, the first one wins.*/\n        else if(_info!=NULL){\n          if(strcmp(header,\"content-type\")==0){\n            if(_info->content_type==NULL){\n              _info->content_type=op_string_dup(cdr);\n            }\n          }\n          else if(header[0]=='i'&&header[1]=='c'\n           &&(header[2]=='e'||header[2]=='y')&&header[3]=='-'){\n            if(strcmp(header+4,\"name\")==0){\n              if(_info->name==NULL)_info->name=op_string_dup(cdr);\n            }\n            else if(strcmp(header+4,\"description\")==0){\n              if(_info->description==NULL)_info->description=op_string_dup(cdr);\n            }\n            else if(strcmp(header+4,\"genre\")==0){\n              if(_info->genre==NULL)_info->genre=op_string_dup(cdr);\n            }\n            else if(strcmp(header+4,\"url\")==0){\n              if(_info->url==NULL)_info->url=op_string_dup(cdr);\n            }\n            else if(strcmp(header,\"icy-br\")==0\n             ||strcmp(header,\"ice-bitrate\")==0){\n              if(_info->bitrate_kbps<0){\n                opus_int64 bitrate_kbps;\n                /*Just re-using this function to parse a random unsigned\n                   integer field.*/\n                bitrate_kbps=op_http_parse_content_length(cdr);\n                if(bitrate_kbps>=0&&bitrate_kbps<=OP_INT32_MAX){\n                  _info->bitrate_kbps=(opus_int32)bitrate_kbps;\n                }\n              }\n            }\n            else if(strcmp(header,\"icy-pub\")==0\n             ||strcmp(header,\"ice-public\")==0){\n              
if(_info->is_public<0&&(cdr[0]=='0'||cdr[0]=='1')&&cdr[1]=='\\0'){\n                _info->is_public=cdr[0]-'0';\n              }\n            }\n          }\n        }\n      }\n      switch(status_code[2]){\n        /*200 OK*/\n        case '0':break;\n        /*203 Non-Authoritative Information*/\n        case '3':break;\n        /*204 No Content*/\n        case '4':{\n          if(content_length>=0&&OP_UNLIKELY(content_length!=0)){\n            return OP_FALSE;\n          }\n        }break;\n        /*206 Partial Content*/\n        case '6':{\n          /*No Content-Range header.*/\n          if(OP_UNLIKELY(range_length<0))return OP_FALSE;\n          content_length=range_length;\n          /*The server supports range requests for this resource.\n            We can seek.*/\n          _stream->seekable=1;\n        }break;\n        /*201 Created: the response \"SHOULD include an entity containing a list\n           of resource characteristics and location(s),\" but not an Opus file.\n          202 Accepted: the response \"SHOULD include an indication of the\n           request's current status and either a pointer to a status monitor or\n           some estimate of when the user can expect the request to be\n           fulfilled,\" but not an Opus file.\n          205 Reset Content: this \"MUST NOT include an entity,\" meaning no Opus\n           file.\n          207...209 are not yet defined, so we don't know how to handle them.*/\n        default:return OP_FALSE;\n      }\n      _stream->content_length=content_length;\n      _stream->pipeline=pipeline_supported&&!pipeline_disabled;\n      /*Pipelining requires HTTP/1.1 persistent connections.*/\n      if(_stream->pipeline)_stream->request.buf[minor_version_pos]='1';\n      _stream->conns[0].pos=0;\n      _stream->conns[0].end_pos=_stream->seekable?content_length:-1;\n      _stream->conns[0].chunk_size=-1;\n      _stream->cur_conni=0;\n      _stream->connect_rate=op_time_diff_ms(&end_time,&start_time);\n      
_stream->connect_rate=OP_MAX(_stream->connect_rate,1);\n      if(_info!=NULL)_info->is_ssl=OP_URL_IS_SSL(&_stream->url);\n      /*The URL has been successfully opened.*/\n      return 0;\n    }\n    /*Shouldn't get 1xx; 4xx and 5xx are both failures (and we don't retry).\n      Everything else is undefined.*/\n    else if(status_code[0]!='3')return OP_FALSE;\n    /*We have some form of redirect request.*/\n    /*We only understand 30x codes.*/\n    if(status_code[1]!='0')return OP_FALSE;\n    switch(status_code[2]){\n      /*300 Multiple Choices: \"If the server has a preferred choice of\n         representation, it SHOULD include the specific URI for that\n         representation in the Location field,\" otherwise we'll fail.*/\n      case '0':\n      /*301 Moved Permanently*/\n      case '1':\n      /*302 Found*/\n      case '2':\n      /*307 Temporary Redirect*/\n      case '7':\n      /*308 Permanent Redirect (defined by draft-reschke-http-status-308-07).*/\n      case '8':break;\n      /*305 Use Proxy: \"The Location field gives the URI of the proxy.\"\n        TODO: This shouldn't actually be that hard to do.*/\n      case '5':return OP_EIMPL;\n      /*303 See Other: \"The new URI is not a substitute reference for the\n         originally requested resource.\"\n        304 Not Modified: \"The 304 response MUST NOT contain a message-body.\"\n        306 (Unused)\n        309 is not yet defined, so we don't know how to handle it.*/\n      default:return OP_FALSE;\n    }\n    _url=NULL;\n    for(;;){\n      char *header;\n      char *cdr;\n      ret=op_http_get_next_header(&header,&cdr,&next);\n      if(OP_UNLIKELY(ret<0))return ret;\n      if(header==NULL)break;\n      if(strcmp(header,\"location\")==0&&OP_LIKELY(_url==NULL))_url=cdr;\n    }\n    if(OP_UNLIKELY(_url==NULL))return OP_FALSE;\n    ret=op_parse_url(&next_url,_url);\n    if(OP_UNLIKELY(ret<0))return ret;\n    if(_proxy_host==NULL||_stream->ssl_session!=NULL){\n      
if(strcmp(_stream->url.host,next_url.host)==0\n       &&_stream->url.port==next_url.port){\n        /*Try to skip re-resolve when connecting to the same host.*/\n        addrs=&_stream->addr_info;\n      }\n      else{\n        if(_stream->ssl_session!=NULL){\n          /*Forget any cached SSL session from the last host.*/\n          SSL_SESSION_free(_stream->ssl_session);\n          _stream->ssl_session=NULL;\n        }\n      }\n    }\n    if(_proxy_host==NULL){\n      OP_ASSERT(_stream->connect_host==_stream->url.host);\n      _stream->connect_host=next_url.host;\n      _stream->connect_port=next_url.port;\n    }\n    /*Always try to skip re-resolve for proxy connections.*/\n    else addrs=&_stream->addr_info;\n    op_parsed_url_clear(&_stream->url);\n    *&_stream->url=*&next_url;\n    /*TODO: On servers/proxies that support pipelining, we might be able to\n       re-use this connection.*/\n    op_http_conn_close(_stream,_stream->conns+0,&_stream->lru_head,1);\n  }\n  /*Redirection limit reached.*/\n  return OP_FALSE;\n}\n\nstatic int op_http_conn_send_request(OpusHTTPStream *_stream,\n OpusHTTPConn *_conn,opus_int64 _pos,opus_int32 _chunk_size,\n int _try_not_to_block){\n  opus_int64 next_end;\n  int        ret;\n  /*We shouldn't have another request outstanding.*/\n  OP_ASSERT(_conn->next_pos<0);\n  /*Build the request to send.*/\n  OP_ASSERT(_stream->request.nbuf>=_stream->request_tail);\n  _stream->request.nbuf=_stream->request_tail;\n  ret=op_sb_append_nonnegative_int64(&_stream->request,_pos);\n  ret|=op_sb_append(&_stream->request,\"-\",1);\n  if(_chunk_size>0&&OP_ADV_OFFSET(_pos,2*_chunk_size)<_stream->content_length){\n    /*We shouldn't be pipelining requests with non-HTTP/1.1 servers.*/\n    OP_ASSERT(_stream->pipeline);\n    next_end=_pos+_chunk_size;\n    ret|=op_sb_append_nonnegative_int64(&_stream->request,next_end-1);\n    /*Use a larger chunk size for our next request.*/\n    _chunk_size<<=1;\n    /*But after a while, just request the rest of 
the resource.*/\n    if(_chunk_size>OP_PIPELINE_CHUNK_SIZE_MAX)_chunk_size=-1;\n  }\n  else{\n    /*Either this was a non-pipelined request or we were close enough to the\n       end to just ask for the rest.*/\n    next_end=-1;\n    _chunk_size=-1;\n  }\n  ret|=op_sb_append(&_stream->request,\"\\r\\n\\r\\n\",4);\n  if(OP_UNLIKELY(ret<0))return ret;\n  /*If we don't want to block, check to see if there's enough space in the send\n     queue.\n    There's still a chance we might block, even if there is enough space, but\n     it's a much slimmer one.\n    Blocking at all is pretty unlikely, as we won't have any requests queued\n     when _try_not_to_block is set, so if FIONSPACE isn't available (e.g., on\n     Linux), just skip the test.*/\n  if(_try_not_to_block){\n# if defined(FIONSPACE)\n    int available;\n    ret=ioctl(_conn->fd,FIONSPACE,&available);\n    if(ret<0||available<_stream->request.nbuf)return 1;\n# endif\n  }\n  ret=op_http_conn_write_fully(_conn,\n   _stream->request.buf,_stream->request.nbuf);\n  if(OP_UNLIKELY(ret<0))return ret;\n  _conn->next_pos=_pos;\n  _conn->next_end=next_end;\n  /*Save the chunk size to use for the next request.*/\n  _conn->chunk_size=_chunk_size;\n  _conn->nrequests_left--;\n  return ret;\n}\n\n/*Handles the response to all requests after the first one.\n  Return: 1 if the connection was closed or timed out, 0 on success, or a\n           negative value on any other error.*/\nstatic int op_http_conn_handle_response(OpusHTTPStream *_stream,\n OpusHTTPConn *_conn){\n  char       *next;\n  char       *status_code;\n  opus_int64  range_length;\n  opus_int64  next_pos;\n  opus_int64  next_end;\n  int         ret;\n  ret=op_http_conn_read_response(_conn,&_stream->response);\n  /*If the server just closed the connection on us, we may have just hit a\n     connection re-use limit, so we might want to retry.*/\n  if(OP_UNLIKELY(ret<0))return ret==OP_EREAD?1:ret;\n  
next=op_http_parse_status_line(NULL,&status_code,_stream->response.buf);\n  if(OP_UNLIKELY(next==NULL))return OP_FALSE;\n  /*We _need_ a 206 Partial Content response.\n    Nothing else will do.*/\n  if(strncmp(status_code,\"206\",3)!=0){\n    /*But on a 408 Request Timeout, we might want to re-try.*/\n    return strncmp(status_code,\"408\",3)==0?1:OP_FALSE;\n  }\n  next_pos=_conn->next_pos;\n  next_end=_conn->next_end;\n  range_length=-1;\n  for(;;){\n    char *header;\n    char *cdr;\n    ret=op_http_get_next_header(&header,&cdr,&next);\n    if(OP_UNLIKELY(ret<0))return ret;\n    if(header==NULL)break;\n    if(strcmp(header,\"content-range\")==0){\n      opus_int64 range_first;\n      opus_int64 range_last;\n      /*Two Content-Range headers?*/\n      if(OP_UNLIKELY(range_length>=0))return OP_FALSE;\n      ret=op_http_parse_content_range(&range_first,&range_last,\n       &range_length,cdr);\n      if(OP_UNLIKELY(ret<0))return ret;\n      /*\"A response with status code 206 (Partial Content) MUST NOT\n         include a Content-Range field with a byte-range-resp-spec of\n         '*'.\"*/\n      if(OP_UNLIKELY(range_first<0)||OP_UNLIKELY(range_last<0))return OP_FALSE;\n      /*We also don't want range_last to overflow.*/\n      if(OP_UNLIKELY(range_last>=OP_INT64_MAX))return OP_FALSE;\n      range_last++;\n      /*Quit if we didn't get the offset we asked for.*/\n      if(range_first!=next_pos)return OP_FALSE;\n      if(next_end<0){\n        /*We asked for the rest of the resource.*/\n        if(range_length>=0){\n          /*Quit if we didn't get it.*/\n          if(OP_UNLIKELY(range_last!=range_length))return OP_FALSE;\n        }\n        /*If there was no length, use the end of the range.*/\n        else range_length=range_last;\n        next_end=range_last;\n      }\n      else{\n        if(range_last!=next_end)return OP_FALSE;\n        /*If there was no length, use the larger of the content length or the\n           end of this chunk.*/\n        
if(range_length<0){\n          range_length=OP_MAX(range_last,_stream->content_length);\n        }\n      }\n    }\n    else if(strcmp(header,\"content-length\")==0){\n      opus_int64 content_length;\n      /*Validate the Content-Length header, if present, against the request we\n         made.*/\n      content_length=op_http_parse_content_length(cdr);\n      if(OP_UNLIKELY(content_length<0))return (int)content_length;\n      if(next_end<0){\n        /*If we haven't seen the Content-Range header yet and we asked for the\n            rest of the resource, set next_end, so we can make sure they match\n            when we do find the Content-Range header.*/\n        if(OP_UNLIKELY(next_pos>OP_INT64_MAX-content_length))return OP_FALSE;\n        next_end=next_pos+content_length;\n      }\n      /*Otherwise, make sure they match now.*/\n      else if(OP_UNLIKELY(next_end-next_pos!=content_length))return OP_FALSE;\n    }\n    else if(strcmp(header,\"connection\")==0){\n      ret=op_http_parse_connection(cdr);\n      if(OP_UNLIKELY(ret<0))return ret;\n      /*If the server told us it was going to close the connection, don't make\n         any more requests.*/\n      if(OP_UNLIKELY(ret>0))_conn->nrequests_left=0;\n    }\n  }\n  /*No Content-Range header.*/\n  if(OP_UNLIKELY(range_length<0))return OP_FALSE;\n  /*Update the content_length if necessary.*/\n  _stream->content_length=range_length;\n  _conn->pos=next_pos;\n  _conn->end_pos=next_end;\n  _conn->next_pos=-1;\n  return 0;\n}\n\n/*Open a new connection that will start reading at byte offset _pos.\n  _pos:        The byte offset to start reading from.\n  _chunk_size: The number of bytes to ask for in the initial request, or -1 to\n                request the rest of the resource.\n               This may be more bytes than remain, in which case it will be\n                converted into a request for the rest.*/\nstatic int op_http_conn_open_pos(OpusHTTPStream *_stream,\n OpusHTTPConn *_conn,opus_int64 _pos,opus_int32 
_chunk_size){\n  struct timeb  start_time;\n  struct timeb  end_time;\n  opus_int32    connect_rate;\n  opus_int32    connect_time;\n  int           ret;\n  ret=op_http_connect(_stream,_conn,&_stream->addr_info,&start_time);\n  if(OP_UNLIKELY(ret<0))return ret;\n  ret=op_http_conn_send_request(_stream,_conn,_pos,_chunk_size,0);\n  if(OP_UNLIKELY(ret<0))return ret;\n  ret=op_http_conn_handle_response(_stream,_conn);\n  if(OP_UNLIKELY(ret!=0))return OP_FALSE;\n  ftime(&end_time);\n  _stream->cur_conni=(int)(_conn-_stream->conns);\n  OP_ASSERT(_stream->cur_conni>=0&&_stream->cur_conni<OP_NCONNS_MAX);\n  /*The connection has been successfully opened.\n    Update the connection time estimate.*/\n  connect_time=op_time_diff_ms(&end_time,&start_time);\n  connect_rate=_stream->connect_rate;\n  connect_rate+=OP_MAX(connect_time,1)-connect_rate+8>>4;\n  _stream->connect_rate=connect_rate;\n  return 0;\n}\n\n/*Read data from the current response body.\n  If we're pipelining and we get close to the end of this response, queue\n   another request.\n  If we've reached the end of this response body, parse the next response and\n   keep going.\n  [out] _buf: Returns the data read.\n  _buf_size:  The size of the buffer.\n  Return: A positive number of bytes read on success.\n          0:        The connection was closed.\n          OP_EREAD: There was a fatal read error.*/\nstatic int op_http_conn_read_body(OpusHTTPStream *_stream,\n OpusHTTPConn *_conn,unsigned char *_buf,int _buf_size){\n  opus_int64 pos;\n  opus_int64 end_pos;\n  opus_int64 next_pos;\n  opus_int64 content_length;\n  int        nread;\n  int        pipeline;\n  int        ret;\n  /*Currently this function can only be called on the LRU head.\n    Otherwise, we'd need a _pnext pointer if we needed to close the connection,\n     and re-opening it would re-organize the lists.*/\n  OP_ASSERT(_stream->lru_head==_conn);\n  /*We should have filtered out empty reads by this point.*/\n  OP_ASSERT(_buf_size>0);\n  
pos=_conn->pos;\n  end_pos=_conn->end_pos;\n  next_pos=_conn->next_pos;\n  pipeline=_stream->pipeline;\n  content_length=_stream->content_length;\n  if(end_pos>=0){\n    /*Have we reached the end of the current response body?*/\n    if(pos>=end_pos){\n      OP_ASSERT(content_length>=0);\n      /*If this was the end of the stream, we're done.\n        Also return early if a non-blocking read was requested (regardless of\n         whether we might be able to parse the next response without\n         blocking).*/\n      if(content_length<=end_pos)return 0;\n      /*Otherwise, start on the next response.*/\n      if(next_pos<0){\n        /*We haven't issued another request yet.*/\n        if(!pipeline||_conn->nrequests_left<=0){\n          /*There are two ways to get here: either the server told us it was\n             going to close the connection after the last request, or we\n             thought we were reading the whole resource, but it grew while we\n             were reading it.\n            The only way the latter could have happened is if content_length\n             changed while seeking.\n            Open a new request to read the rest.*/\n          OP_ASSERT(_stream->seekable);\n          /*Try to open a new connection to read another chunk.*/\n          op_http_conn_close(_stream,_conn,&_stream->lru_head,1);\n          /*If we're not pipelining, we should be requesting the rest.*/\n          OP_ASSERT(pipeline||_conn->chunk_size==-1);\n          ret=op_http_conn_open_pos(_stream,_conn,end_pos,_conn->chunk_size);\n          if(OP_UNLIKELY(ret<0))return OP_EREAD;\n        }\n        else{\n          /*Issue the request now (better late than never).*/\n          ret=op_http_conn_send_request(_stream,_conn,pos,_conn->chunk_size,0);\n          if(OP_UNLIKELY(ret<0))return OP_EREAD;\n          next_pos=_conn->next_pos;\n          OP_ASSERT(next_pos>=0);\n        }\n      }\n      if(next_pos>=0){\n        /*We shouldn't be trying to read past the current request 
body if we're\n           seeking somewhere else.*/\n        OP_ASSERT(next_pos==end_pos);\n        ret=op_http_conn_handle_response(_stream,_conn);\n        if(OP_UNLIKELY(ret<0))return OP_EREAD;\n        if(OP_UNLIKELY(ret>0)&&pipeline){\n          opus_int64 next_end;\n          next_end=_conn->next_end;\n          /*Our request timed out or the server closed the connection.\n            Try re-connecting.*/\n          op_http_conn_close(_stream,_conn,&_stream->lru_head,1);\n          /*Unless there's a bug, we should be able to convert\n             (next_pos,next_end) into valid (_pos,_chunk_size) parameters.*/\n          OP_ASSERT(next_end<0\n           ||next_end-next_pos>=0&&next_end-next_pos<=OP_INT32_MAX);\n          ret=op_http_conn_open_pos(_stream,_conn,next_pos,\n           next_end<0?-1:(opus_int32)(next_end-next_pos));\n          if(OP_UNLIKELY(ret<0))return OP_EREAD;\n        }\n        else if(OP_UNLIKELY(ret!=0))return OP_EREAD;\n      }\n      pos=_conn->pos;\n      end_pos=_conn->end_pos;\n      content_length=_stream->content_length;\n    }\n    OP_ASSERT(end_pos>pos);\n    _buf_size=(int)OP_MIN(_buf_size,end_pos-pos);\n  }\n  nread=op_http_conn_read(_conn,(char *)_buf,_buf_size,1);\n  if(OP_UNLIKELY(nread<0))return nread;\n  pos+=nread;\n  _conn->pos=pos;\n  OP_ASSERT(end_pos<0||content_length>=0);\n  /*TODO: If nrequests_left<=0, we can't make a new request, and there will be\n     a big pause after we hit the end of the chunk while we open a new\n     connection.\n    It would be nice to be able to start that process now, but we have no way\n     to do it in the background without blocking (even if we could start it, we\n     have no guarantee the application will return control to us in a\n     sufficiently timely manner to allow us to complete it, and this is\n     uncommon enough that it's not worth using threads just for this).*/\n  if(end_pos>=0&&end_pos<content_length&&next_pos<0\n   &&pipeline&&OP_LIKELY(_conn->nrequests_left>0)){\n  
  opus_int64 request_thresh;\n    opus_int32 chunk_size;\n    /*Are we getting close to the end of the current response body?\n      If so, we should request more data.*/\n    request_thresh=_stream->connect_rate*_conn->read_rate>>12;\n    /*But don't commit ourselves too quickly.*/\n    chunk_size=_conn->chunk_size;\n    if(chunk_size>=0)request_thresh=OP_MIN(chunk_size>>2,request_thresh);\n    if(end_pos-pos<request_thresh){\n      ret=op_http_conn_send_request(_stream,_conn,end_pos,_conn->chunk_size,1);\n      if(OP_UNLIKELY(ret<0))return OP_EREAD;\n    }\n  }\n  return nread;\n}\n\nstatic int op_http_stream_read(void *_stream,\n unsigned char *_ptr,int _buf_size){\n  OpusHTTPStream *stream;\n  int             nread;\n  opus_int64      size;\n  opus_int64      pos;\n  int             ci;\n  stream=(OpusHTTPStream *)_stream;\n  /*Check for an empty read.*/\n  if(_buf_size<=0)return 0;\n  ci=stream->cur_conni;\n  /*No current connection => EOF.*/\n  if(ci<0)return 0;\n  pos=stream->conns[ci].pos;\n  size=stream->content_length;\n  /*Check for EOF.*/\n  if(size>=0){\n    if(pos>=size)return 0;\n    /*Check for a short read.*/\n    if(_buf_size>size-pos)_buf_size=(int)(size-pos);\n  }\n  nread=op_http_conn_read_body(stream,stream->conns+ci,_ptr,_buf_size);\n  if(OP_UNLIKELY(nread<=0)){\n    /*We hit an error or EOF.\n      Either way, we're done with this connection.*/\n    op_http_conn_close(stream,stream->conns+ci,&stream->lru_head,1);\n    stream->cur_conni=-1;\n    stream->pos=pos;\n  }\n  return nread;\n}\n\n/*Discard data until we reach the _target position.\n  This destroys the contents of _stream->response.buf, as we need somewhere to\n   read this data, and that is a convenient place.\n  _just_read_ahead: Whether or not this is a plain fast-forward.\n                    If 0, we need to issue a new request for a chunk at _target\n                     and discard all the data from our current request(s).\n                    Otherwise, we should be able to 
reach _target without\n                     issuing any new requests.\n  _target:          The stream position to which to read ahead.*/\nstatic int op_http_conn_read_ahead(OpusHTTPStream *_stream,\n OpusHTTPConn *_conn,int _just_read_ahead,opus_int64 _target){\n  opus_int64 pos;\n  opus_int64 end_pos;\n  opus_int64 next_pos;\n  opus_int64 next_end;\n  ptrdiff_t  nread;\n  int        ret;\n  pos=_conn->pos;\n  end_pos=_conn->end_pos;\n  next_pos=_conn->next_pos;\n  next_end=_conn->next_end;\n  if(!_just_read_ahead){\n    /*We need to issue a new pipelined request.\n      This is the only case where we allow more than one outstanding request\n       at a time, so we need to reset next_pos (we'll restore it below if we\n       did have an outstanding request).*/\n    OP_ASSERT(_stream->pipeline);\n    _conn->next_pos=-1;\n    ret=op_http_conn_send_request(_stream,_conn,_target,\n     OP_PIPELINE_CHUNK_SIZE,0);\n    if(OP_UNLIKELY(ret<0))return ret;\n  }\n  /*We can reach the target position by reading forward in the current chunk.*/\n  if(_just_read_ahead&&(end_pos<0||_target<end_pos))end_pos=_target;\n  else if(next_pos>=0){\n    opus_int64 next_next_pos;\n    opus_int64 next_next_end;\n    /*We already have a request outstanding.\n      Finish off the current chunk.*/\n    while(pos<end_pos){\n      nread=op_http_conn_read(_conn,_stream->response.buf,\n       (int)OP_MIN(end_pos-pos,_stream->response.cbuf),1);\n      /*We failed to read ahead.*/\n      if(nread<=0)return OP_FALSE;\n      pos+=nread;\n    }\n    OP_ASSERT(pos==end_pos);\n    if(_just_read_ahead){\n      next_next_pos=next_next_end=-1;\n      end_pos=_target;\n    }\n    else{\n      OP_ASSERT(_conn->next_pos==_target);\n      next_next_pos=_target;\n      next_next_end=_conn->next_end;\n      _conn->next_pos=next_pos;\n      _conn->next_end=next_end;\n      end_pos=next_end;\n    }\n    ret=op_http_conn_handle_response(_stream,_conn);\n    if(OP_UNLIKELY(ret!=0))return OP_FALSE;\n    
_conn->next_pos=next_next_pos;\n    _conn->next_end=next_next_end;\n  }\n  while(pos<end_pos){\n    nread=op_http_conn_read(_conn,_stream->response.buf,\n     (int)OP_MIN(end_pos-pos,_stream->response.cbuf),1);\n    /*We failed to read ahead.*/\n    if(nread<=0)return OP_FALSE;\n    pos+=nread;\n  }\n  OP_ASSERT(pos==end_pos);\n  if(!_just_read_ahead){\n    ret=op_http_conn_handle_response(_stream,_conn);\n    if(OP_UNLIKELY(ret!=0))return OP_FALSE;\n  }\n  else _conn->pos=end_pos;\n  OP_ASSERT(_conn->pos==_target);\n  return 0;\n}\n\nstatic int op_http_stream_seek(void *_stream,opus_int64 _offset,int _whence){\n  struct timeb     seek_time;\n  OpusHTTPStream  *stream;\n  OpusHTTPConn    *conn;\n  OpusHTTPConn   **pnext;\n  OpusHTTPConn    *close_conn;\n  OpusHTTPConn   **close_pnext;\n  opus_int64       content_length;\n  opus_int64       pos;\n  int              pipeline;\n  int              ci;\n  int              ret;\n  stream=(OpusHTTPStream *)_stream;\n  if(!stream->seekable)return -1;\n  content_length=stream->content_length;\n  /*If we're seekable, we should have gotten a Content-Length.*/\n  OP_ASSERT(content_length>=0);\n  ci=stream->cur_conni;\n  pos=ci<0?content_length:stream->conns[ci].pos;\n  switch(_whence){\n    case SEEK_SET:{\n      /*Check for overflow:*/\n      if(_offset<0)return -1;\n      pos=_offset;\n    }break;\n    case SEEK_CUR:{\n      /*Check for overflow:*/\n      if(_offset<-pos||_offset>OP_INT64_MAX-pos)return -1;\n      pos+=_offset;\n    }break;\n    case SEEK_END:{\n      /*Check for overflow:*/\n      if(_offset>content_length||_offset<content_length-OP_INT64_MAX)return -1;\n      pos=content_length-_offset;\n    }break;\n    default:return -1;\n  }\n  /*Mark when we deactivated the active connection.*/\n  if(ci>=0){\n    op_http_conn_read_rate_update(stream->conns+ci);\n    *&seek_time=*&stream->conns[ci].read_time;\n  }\n  else ftime(&seek_time);\n  /*If we seeked past the end of the stream, just disable the active\n     
connection.*/\n  if(pos>=content_length){\n    stream->cur_conni=-1;\n    stream->pos=pos;\n    return 0;\n  }\n  /*First try to find a connection we can use without waiting.*/\n  pnext=&stream->lru_head;\n  conn=stream->lru_head;\n  while(conn!=NULL){\n    opus_int64 conn_pos;\n    opus_int64 end_pos;\n    int        available;\n    /*If this connection has been dormant too long or has made too many\n       requests, close it.\n      This is to prevent us from hitting server limits/firewall timeouts.*/\n    if(op_time_diff_ms(&seek_time,&conn->read_time)>\n     OP_CONNECTION_IDLE_TIMEOUT_MS\n     ||conn->nrequests_left<OP_PIPELINE_MIN_REQUESTS){\n      op_http_conn_close(stream,conn,pnext,1);\n      conn=*pnext;\n      continue;\n    }\n    available=op_http_conn_estimate_available(conn);\n    conn_pos=conn->pos;\n    end_pos=conn->end_pos;\n    if(conn->next_pos>=0){\n      OP_ASSERT(end_pos>=0);\n      OP_ASSERT(conn->next_pos==end_pos);\n      end_pos=conn->next_end;\n    }\n    OP_ASSERT(end_pos<0||conn_pos<=end_pos);\n    /*Can we quickly read ahead without issuing a new request or waiting for\n       any more data?\n      If we have an outstanding request, we'll over-estimate the amount of data\n       it has available (because we'll count the response headers, too), but\n       that probably doesn't matter.*/\n    if(conn_pos<=pos&&pos-conn_pos<=available&&(end_pos<0||pos<end_pos)){\n      /*Found a suitable connection to re-use.*/\n      ret=op_http_conn_read_ahead(stream,conn,1,pos);\n      if(OP_UNLIKELY(ret<0)){\n        /*The connection might have become stale, so close it and keep going.*/\n        op_http_conn_close(stream,conn,pnext,1);\n        conn=*pnext;\n        continue;\n      }\n      /*Successfully resurrected this connection.*/\n      *pnext=conn->next;\n      conn->next=stream->lru_head;\n      stream->lru_head=conn;\n      stream->cur_conni=(int)(conn-stream->conns);\n      
OP_ASSERT(stream->cur_conni>=0&&stream->cur_conni<OP_NCONNS_MAX);\n      return 0;\n    }\n    pnext=&conn->next;\n    conn=conn->next;\n  }\n  /*Chances are that didn't work, so now try to find one we can use by reading\n     ahead a reasonable amount and/or by issuing a new request.*/\n  close_pnext=NULL;\n  close_conn=NULL;\n  pnext=&stream->lru_head;\n  conn=stream->lru_head;\n  pipeline=stream->pipeline;\n  while(conn!=NULL){\n    opus_int64 conn_pos;\n    opus_int64 end_pos;\n    opus_int64 read_ahead_thresh;\n    int        available;\n    int        just_read_ahead;\n    /*Dividing by 2048 instead of 1000 scales this by nearly 1/2, biasing away\n       from connection re-use (and roughly compensating for the lag required to\n       reopen the TCP window of a connection that's been idle).\n      There's no overflow checking here, because it's vanishingly unlikely, and\n       all it would do is cause us to make poor decisions.*/\n    read_ahead_thresh=OP_MAX(OP_READAHEAD_THRESH_MIN,\n     stream->connect_rate*conn->read_rate>>11);\n    available=op_http_conn_estimate_available(conn);\n    conn_pos=conn->pos;\n    end_pos=conn->end_pos;\n    if(conn->next_pos>=0){\n      OP_ASSERT(end_pos>=0);\n      OP_ASSERT(conn->next_pos==end_pos);\n      end_pos=conn->next_end;\n    }\n    OP_ASSERT(end_pos<0||conn_pos<=end_pos);\n    /*Can we quickly read ahead without issuing a new request?*/\n    just_read_ahead=conn_pos<=pos&&pos-conn_pos-available<=read_ahead_thresh\n     &&(end_pos<0||pos<end_pos);\n    if(just_read_ahead||pipeline&&end_pos>=0\n     &&end_pos-conn_pos-available<=read_ahead_thresh){\n      /*Found a suitable connection to re-use.*/\n      ret=op_http_conn_read_ahead(stream,conn,just_read_ahead,pos);\n      if(OP_UNLIKELY(ret<0)){\n        /*The connection might have become stale, so close it and keep going.*/\n        op_http_conn_close(stream,conn,pnext,1);\n        conn=*pnext;\n        continue;\n      }\n      /*Successfully resurrected this 
connection.*/\n      *pnext=conn->next;\n      conn->next=stream->lru_head;\n      stream->lru_head=conn;\n      stream->cur_conni=(int)(conn-stream->conns);\n      OP_ASSERT(stream->cur_conni>=0&&stream->cur_conni<OP_NCONNS_MAX);\n      return 0;\n    }\n    close_pnext=pnext;\n    close_conn=conn;\n    pnext=&conn->next;\n    conn=conn->next;\n  }\n  /*No suitable connections.\n    Open a new one.*/\n  if(stream->free_head==NULL){\n    /*All connections in use.\n      Expire one of them (we should have already picked which one when scanning\n       the list).*/\n    OP_ASSERT(close_conn!=NULL);\n    OP_ASSERT(close_pnext!=NULL);\n    op_http_conn_close(stream,close_conn,close_pnext,1);\n  }\n  OP_ASSERT(stream->free_head!=NULL);\n  conn=stream->free_head;\n  /*If we can pipeline, only request a chunk of data.\n    If we're seeking now, there's a good chance we will want to seek again\n     soon, and this avoids committing this connection to reading the rest of\n     the stream.\n    Particularly with SSL or proxies, issuing a new request on the same\n     connection can be substantially faster than opening a new one.\n    This also limits the amount of data the server will blast at us on this\n     connection if we later seek elsewhere and start reading from a different\n     connection.*/\n  ret=op_http_conn_open_pos(stream,conn,pos,\n   pipeline?OP_PIPELINE_CHUNK_SIZE:-1);\n  if(OP_UNLIKELY(ret<0)){\n    op_http_conn_close(stream,conn,&stream->lru_head,1);\n    return -1;\n  }\n  return 0;\n}\n\nstatic opus_int64 op_http_stream_tell(void *_stream){\n  OpusHTTPStream *stream;\n  int             ci;\n  stream=(OpusHTTPStream *)_stream;\n  ci=stream->cur_conni;\n  return ci<0?stream->pos:stream->conns[ci].pos;\n}\n\nstatic int op_http_stream_close(void *_stream){\n  OpusHTTPStream *stream;\n  stream=(OpusHTTPStream *)_stream;\n  if(OP_LIKELY(stream!=NULL)){\n    op_http_stream_clear(stream);\n    _ogg_free(stream);\n  }\n  return 0;\n}\n\nstatic const 
OpusFileCallbacks OP_HTTP_CALLBACKS={\n  op_http_stream_read,\n  op_http_stream_seek,\n  op_http_stream_tell,\n  op_http_stream_close\n};\n#endif\n\nvoid opus_server_info_init(OpusServerInfo *_info){\n  _info->name=NULL;\n  _info->description=NULL;\n  _info->genre=NULL;\n  _info->url=NULL;\n  _info->server=NULL;\n  _info->content_type=NULL;\n  _info->bitrate_kbps=-1;\n  _info->is_public=-1;\n  _info->is_ssl=0;\n}\n\nvoid opus_server_info_clear(OpusServerInfo *_info){\n  _ogg_free(_info->content_type);\n  _ogg_free(_info->server);\n  _ogg_free(_info->url);\n  _ogg_free(_info->genre);\n  _ogg_free(_info->description);\n  _ogg_free(_info->name);\n}\n\n/*The actual URL stream creation function.\n  This one isn't extensible like the application-level interface, but because\n   it isn't public, we're free to change it in the future.*/\nstatic void *op_url_stream_create_impl(OpusFileCallbacks *_cb,const char *_url,\n int _skip_certificate_check,const char *_proxy_host,unsigned _proxy_port,\n const char *_proxy_user,const char *_proxy_pass,OpusServerInfo *_info){\n  const char *path;\n  /*Check to see if this is a valid file: URL.*/\n  path=op_parse_file_url(_url);\n  if(path!=NULL){\n    char *unescaped_path;\n    void *ret;\n    unescaped_path=op_string_dup(path);\n    if(OP_UNLIKELY(unescaped_path==NULL))return NULL;\n    ret=op_fopen(_cb,op_unescape_url_component(unescaped_path),\"rb\");\n    _ogg_free(unescaped_path);\n    return ret;\n  }\n#if defined(OP_ENABLE_HTTP)\n  /*If not, try http/https.*/\n  else{\n    OpusHTTPStream *stream;\n    int             ret;\n    stream=(OpusHTTPStream *)_ogg_malloc(sizeof(*stream));\n    if(OP_UNLIKELY(stream==NULL))return NULL;\n    op_http_stream_init(stream);\n    ret=op_http_stream_open(stream,_url,_skip_certificate_check,\n     _proxy_host,_proxy_port,_proxy_user,_proxy_pass,_info);\n    if(OP_UNLIKELY(ret<0)){\n      op_http_stream_clear(stream);\n      _ogg_free(stream);\n      return NULL;\n    }\n    
*_cb=*&OP_HTTP_CALLBACKS;\n    return stream;\n  }\n#else\n  (void)_skip_certificate_check;\n  (void)_proxy_host;\n  (void)_proxy_port;\n  (void)_proxy_user;\n  (void)_proxy_pass;\n  (void)_info;\n  return NULL;\n#endif\n}\n\n/*The actual implementation of op_url_stream_vcreate().\n  We have to do a careful dance here to avoid potential memory leaks if\n   OpusServerInfo is requested, since this function is also used by\n   op_vopen_url() and op_vtest_url().\n  Even if this function succeeds, those functions might ultimately fail.\n  If they do, they should return without having touched the OpusServerInfo\n   passed by the application.\n  Therefore, if this function succeeds and OpusServerInfo is requested, the\n   actual info will be stored in *_info and a pointer to the application's\n   storage will be placed in *_pinfo.\n  If this function fails or if the application did not request OpusServerInfo,\n   *_pinfo will be NULL.\n  Our caller is responsible for copying *_info to **_pinfo if it ultimately\n   succeeds, or for clearing *_info if it ultimately fails.*/\nstatic void *op_url_stream_vcreate_impl(OpusFileCallbacks *_cb,\n const char *_url,OpusServerInfo *_info,OpusServerInfo **_pinfo,va_list _ap){\n  int             skip_certificate_check;\n  const char     *proxy_host;\n  opus_int32      proxy_port;\n  const char     *proxy_user;\n  const char     *proxy_pass;\n  OpusServerInfo *pinfo;\n  skip_certificate_check=0;\n  proxy_host=NULL;\n  proxy_port=8080;\n  proxy_user=NULL;\n  proxy_pass=NULL;\n  pinfo=NULL;\n  *_pinfo=NULL;\n  for(;;){\n    ptrdiff_t request;\n    request=va_arg(_ap,char *)-(char *)NULL;\n    /*If we hit NULL, we're done processing options.*/\n    if(!request)break;\n    switch(request){\n      case OP_SSL_SKIP_CERTIFICATE_CHECK_REQUEST:{\n        skip_certificate_check=!!va_arg(_ap,opus_int32);\n      }break;\n      case OP_HTTP_PROXY_HOST_REQUEST:{\n        proxy_host=va_arg(_ap,const char *);\n      }break;\n      case 
OP_HTTP_PROXY_PORT_REQUEST:{\n        proxy_port=va_arg(_ap,opus_int32);\n        if(proxy_port<0||proxy_port>(opus_int32)65535)return NULL;\n      }break;\n      case OP_HTTP_PROXY_USER_REQUEST:{\n        proxy_user=va_arg(_ap,const char *);\n      }break;\n      case OP_HTTP_PROXY_PASS_REQUEST:{\n        proxy_pass=va_arg(_ap,const char *);\n      }break;\n      case OP_GET_SERVER_INFO_REQUEST:{\n        pinfo=va_arg(_ap,OpusServerInfo *);\n      }break;\n      /*Some unknown option.*/\n      default:return NULL;\n    }\n  }\n  /*If the caller has requested server information, proxy it to a local copy to\n     simplify error handling.*/\n  if(pinfo!=NULL){\n    void *ret;\n    opus_server_info_init(_info);\n    ret=op_url_stream_create_impl(_cb,_url,skip_certificate_check,\n     proxy_host,proxy_port,proxy_user,proxy_pass,_info);\n    if(ret!=NULL)*_pinfo=pinfo;\n    else opus_server_info_clear(_info);\n    return ret;\n  }\n  return op_url_stream_create_impl(_cb,_url,skip_certificate_check,\n   proxy_host,proxy_port,proxy_user,proxy_pass,NULL);\n}\n\nvoid *op_url_stream_vcreate(OpusFileCallbacks *_cb,\n const char *_url,va_list _ap){\n  OpusServerInfo   info;\n  OpusServerInfo *pinfo;\n  void *ret;\n  ret=op_url_stream_vcreate_impl(_cb,_url,&info,&pinfo,_ap);\n  if(pinfo!=NULL)*pinfo=*&info;\n  return ret;\n}\n\nvoid *op_url_stream_create(OpusFileCallbacks *_cb,\n const char *_url,...){\n  va_list  ap;\n  void    *ret;\n  va_start(ap,_url);\n  ret=op_url_stream_vcreate(_cb,_url,ap);\n  va_end(ap);\n  return ret;\n}\n\n/*Convenience routines to open/test URLs in a single step.*/\n\nOggOpusFile *op_vopen_url(const char *_url,int *_error,va_list _ap){\n  OpusFileCallbacks  cb;\n  OggOpusFile       *of;\n  OpusServerInfo     info;\n  OpusServerInfo    *pinfo;\n  void              *source;\n  source=op_url_stream_vcreate_impl(&cb,_url,&info,&pinfo,_ap);\n  if(OP_UNLIKELY(source==NULL)){\n    OP_ASSERT(pinfo==NULL);\n    if(_error!=NULL)*_error=OP_EFAULT;\n    return 
NULL;\n  }\n  of=op_open_callbacks(source,&cb,NULL,0,_error);\n  if(OP_UNLIKELY(of==NULL)){\n    if(pinfo!=NULL)opus_server_info_clear(&info);\n    (*cb.close)(source);\n  }\n  else if(pinfo!=NULL)*pinfo=*&info;\n  return of;\n}\n\nOggOpusFile *op_open_url(const char *_url,int *_error,...){\n  OggOpusFile *ret;\n  va_list      ap;\n  va_start(ap,_error);\n  ret=op_vopen_url(_url,_error,ap);\n  va_end(ap);\n  return ret;\n}\n\nOggOpusFile *op_vtest_url(const char *_url,int *_error,va_list _ap){\n  OpusFileCallbacks  cb;\n  OggOpusFile       *of;\n  OpusServerInfo     info;\n  OpusServerInfo    *pinfo;\n  void              *source;\n  source=op_url_stream_vcreate_impl(&cb,_url,&info,&pinfo,_ap);\n  if(OP_UNLIKELY(source==NULL)){\n    OP_ASSERT(pinfo==NULL);\n    if(_error!=NULL)*_error=OP_EFAULT;\n    return NULL;\n  }\n  of=op_test_callbacks(source,&cb,NULL,0,_error);\n  if(OP_UNLIKELY(of==NULL)){\n    if(pinfo!=NULL)opus_server_info_clear(&info);\n    (*cb.close)(source);\n  }\n  else if(pinfo!=NULL)*pinfo=*&info;\n  return of;\n}\n\nOggOpusFile *op_test_url(const char *_url,int *_error,...){\n  OggOpusFile *ret;\n  va_list      ap;\n  va_start(ap,_error);\n  ret=op_vtest_url(_url,_error,ap);\n  va_end(ap);\n  return ret;\n}\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/info.c",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 2012                *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************/\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"internal.h\"\n#include <limits.h>\n#include <string.h>\n\nstatic unsigned op_parse_uint16le(const unsigned char *_data){\n  return _data[0]|_data[1]<<8;\n}\n\nstatic int op_parse_int16le(const unsigned char *_data){\n  int ret;\n  ret=_data[0]|_data[1]<<8;\n  return (ret^0x8000)-0x8000;\n}\n\nstatic opus_uint32 op_parse_uint32le(const unsigned char *_data){\n  return _data[0]|(opus_uint32)_data[1]<<8|\n   (opus_uint32)_data[2]<<16|(opus_uint32)_data[3]<<24;\n}\n\nstatic opus_uint32 op_parse_uint32be(const unsigned char *_data){\n  return _data[3]|(opus_uint32)_data[2]<<8|\n   (opus_uint32)_data[1]<<16|(opus_uint32)_data[0]<<24;\n}\n\nint opus_head_parse(OpusHead *_head,const unsigned char *_data,size_t _len){\n  OpusHead head;\n  if(_len<8)return OP_ENOTFORMAT;\n  if(memcmp(_data,\"OpusHead\",8)!=0)return OP_ENOTFORMAT;\n  if(_len<9)return OP_EBADHEADER;\n  head.version=_data[8];\n  if(head.version>15)return OP_EVERSION;\n  if(_len<19)return OP_EBADHEADER;\n  head.channel_count=_data[9];\n  head.pre_skip=op_parse_uint16le(_data+10);\n  head.input_sample_rate=op_parse_uint32le(_data+12);\n  
head.output_gain=op_parse_int16le(_data+16);\n  head.mapping_family=_data[18];\n  if(head.mapping_family==0){\n    if(head.channel_count<1||head.channel_count>2)return OP_EBADHEADER;\n    if(head.version<=1&&_len>19)return OP_EBADHEADER;\n    head.stream_count=1;\n    head.coupled_count=head.channel_count-1;\n    if(_head!=NULL){\n      _head->mapping[0]=0;\n      _head->mapping[1]=1;\n    }\n  }\n  else if(head.mapping_family==1){\n    size_t size;\n    int    ci;\n    if(head.channel_count<1||head.channel_count>8)return OP_EBADHEADER;\n    size=21+head.channel_count;\n    if(_len<size||head.version<=1&&_len>size)return OP_EBADHEADER;\n    head.stream_count=_data[19];\n    if(head.stream_count<1)return OP_EBADHEADER;\n    head.coupled_count=_data[20];\n    if(head.coupled_count>head.stream_count)return OP_EBADHEADER;\n    for(ci=0;ci<head.channel_count;ci++){\n      if(_data[21+ci]>=head.stream_count+head.coupled_count\n       &&_data[21+ci]!=255){\n        return OP_EBADHEADER;\n      }\n    }\n    if(_head!=NULL)memcpy(_head->mapping,_data+21,head.channel_count);\n  }\n  /*General purpose players should not attempt to play back content with\n     channel mapping family 255.*/\n  else if(head.mapping_family==255)return OP_EIMPL;\n  /*No other channel mapping families are currently defined.*/\n  else return OP_EBADHEADER;\n  if(_head!=NULL)memcpy(_head,&head,head.mapping-(unsigned char *)&head);\n  return 0;\n}\n\nvoid opus_tags_init(OpusTags *_tags){\n  memset(_tags,0,sizeof(*_tags));\n}\n\nvoid opus_tags_clear(OpusTags *_tags){\n  int ncomments;\n  int ci;\n  ncomments=_tags->comments;\n  if(_tags->user_comments!=NULL)ncomments++;\n  for(ci=ncomments;ci-->0;)_ogg_free(_tags->user_comments[ci]);\n  _ogg_free(_tags->user_comments);\n  _ogg_free(_tags->comment_lengths);\n  _ogg_free(_tags->vendor);\n}\n\n/*Ensure there's room for up to _ncomments comments.*/\nstatic int op_tags_ensure_capacity(OpusTags *_tags,size_t _ncomments){\n  char   **user_comments;\n  int    
 *comment_lengths;\n  int      cur_ncomments;\n  size_t   size;\n  if(OP_UNLIKELY(_ncomments>=(size_t)INT_MAX))return OP_EFAULT;\n  size=sizeof(*_tags->comment_lengths)*(_ncomments+1);\n  if(size/sizeof(*_tags->comment_lengths)!=_ncomments+1)return OP_EFAULT;\n  cur_ncomments=_tags->comments;\n  /*We only support growing.\n    Trimming requires cleaning up the allocated strings in the old space, and\n     is best handled separately if it's ever needed.*/\n  OP_ASSERT(_ncomments>=(size_t)cur_ncomments);\n  comment_lengths=_tags->comment_lengths;\n  comment_lengths=(int *)_ogg_realloc(_tags->comment_lengths,size);\n  if(OP_UNLIKELY(comment_lengths==NULL))return OP_EFAULT;\n  if(_tags->comment_lengths==NULL){\n    OP_ASSERT(cur_ncomments==0);\n    comment_lengths[cur_ncomments]=0;\n  }\n  comment_lengths[_ncomments]=comment_lengths[cur_ncomments];\n  _tags->comment_lengths=comment_lengths;\n  size=sizeof(*_tags->user_comments)*(_ncomments+1);\n  if(size/sizeof(*_tags->user_comments)!=_ncomments+1)return OP_EFAULT;\n  user_comments=(char **)_ogg_realloc(_tags->user_comments,size);\n  if(OP_UNLIKELY(user_comments==NULL))return OP_EFAULT;\n  if(_tags->user_comments==NULL){\n    OP_ASSERT(cur_ncomments==0);\n    user_comments[cur_ncomments]=NULL;\n  }\n  user_comments[_ncomments]=user_comments[cur_ncomments];\n  _tags->user_comments=user_comments;\n  return 0;\n}\n\n/*Duplicate a (possibly non-NUL terminated) string with a known length.*/\nstatic char *op_strdup_with_len(const char *_s,size_t _len){\n  size_t  size;\n  char   *ret;\n  size=sizeof(*ret)*(_len+1);\n  if(OP_UNLIKELY(size<_len))return NULL;\n  ret=(char *)_ogg_malloc(size);\n  if(OP_LIKELY(ret!=NULL)){\n    ret=(char *)memcpy(ret,_s,sizeof(*ret)*_len);\n    ret[_len]='\\0';\n  }\n  return ret;\n}\n\n/*The actual implementation of opus_tags_parse().\n  Unlike the public API, this function requires _tags to already be\n   initialized, modifies its contents before success is guaranteed, and assumes\n   the 
caller will clear it on error.*/\nstatic int opus_tags_parse_impl(OpusTags *_tags,\n const unsigned char *_data,size_t _len){\n  opus_uint32 count;\n  size_t      len;\n  int         ncomments;\n  int         ci;\n  len=_len;\n  if(len<8)return OP_ENOTFORMAT;\n  if(memcmp(_data,\"OpusTags\",8)!=0)return OP_ENOTFORMAT;\n  if(len<16)return OP_EBADHEADER;\n  _data+=8;\n  len-=8;\n  count=op_parse_uint32le(_data);\n  _data+=4;\n  len-=4;\n  if(count>len)return OP_EBADHEADER;\n  if(_tags!=NULL){\n    _tags->vendor=op_strdup_with_len((char *)_data,count);\n    if(_tags->vendor==NULL)return OP_EFAULT;\n  }\n  _data+=count;\n  len-=count;\n  if(len<4)return OP_EBADHEADER;\n  count=op_parse_uint32le(_data);\n  _data+=4;\n  len-=4;\n  /*Check to make sure there's minimally sufficient data left in the packet.*/\n  if(count>len>>2)return OP_EBADHEADER;\n  /*Check for overflow (the API limits this to an int).*/\n  if(count>(opus_uint32)INT_MAX-1)return OP_EFAULT;\n  if(_tags!=NULL){\n    int ret;\n    ret=op_tags_ensure_capacity(_tags,count);\n    if(ret<0)return ret;\n  }\n  ncomments=(int)count;\n  for(ci=0;ci<ncomments;ci++){\n    /*Check to make sure there's minimally sufficient data left in the packet.*/\n    if((size_t)(ncomments-ci)>len>>2)return OP_EBADHEADER;\n    count=op_parse_uint32le(_data);\n    _data+=4;\n    len-=4;\n    if(count>len)return OP_EBADHEADER;\n    /*Check for overflow (the API limits this to an int).*/\n    if(count>(opus_uint32)INT_MAX)return OP_EFAULT;\n    if(_tags!=NULL){\n      _tags->user_comments[ci]=op_strdup_with_len((char *)_data,count);\n      if(_tags->user_comments[ci]==NULL)return OP_EFAULT;\n      _tags->comment_lengths[ci]=(int)count;\n      _tags->comments=ci+1;\n      /*Needed by opus_tags_clear() if we fail before parsing the (optional)\n         binary metadata.*/\n      _tags->user_comments[ci+1]=NULL;\n    }\n    _data+=count;\n    len-=count;\n  }\n  if(len>0&&(_data[0]&1)){\n    if(len>(opus_uint32)INT_MAX)return OP_EFAULT;\n 
   if(_tags!=NULL){\n      _tags->user_comments[ncomments]=(char *)_ogg_malloc(len);\n      if(OP_UNLIKELY(_tags->user_comments[ncomments]==NULL))return OP_EFAULT;\n      memcpy(_tags->user_comments[ncomments],_data,len);\n      _tags->comment_lengths[ncomments]=(int)len;\n    }\n  }\n  return 0;\n}\n\nint opus_tags_parse(OpusTags *_tags,const unsigned char *_data,size_t _len){\n  if(_tags!=NULL){\n    OpusTags tags;\n    int      ret;\n    opus_tags_init(&tags);\n    ret=opus_tags_parse_impl(&tags,_data,_len);\n    if(ret<0)opus_tags_clear(&tags);\n    else *_tags=*&tags;\n    return ret;\n  }\n  else return opus_tags_parse_impl(NULL,_data,_len);\n}\n\n/*The actual implementation of opus_tags_copy().\n  Unlike the public API, this function requires _dst to already be\n   initialized, modifies its contents before success is guaranteed, and assumes\n   the caller will clear it on error.*/\nstatic int opus_tags_copy_impl(OpusTags *_dst,const OpusTags *_src){\n  char *vendor;\n  int   ncomments;\n  int   ret;\n  int   ci;\n  vendor=_src->vendor;\n  _dst->vendor=op_strdup_with_len(vendor,strlen(vendor));\n  if(OP_UNLIKELY(_dst->vendor==NULL))return OP_EFAULT;\n  ncomments=_src->comments;\n  ret=op_tags_ensure_capacity(_dst,ncomments);\n  if(OP_UNLIKELY(ret<0))return ret;\n  for(ci=0;ci<ncomments;ci++){\n    int len;\n    len=_src->comment_lengths[ci];\n    OP_ASSERT(len>=0);\n    _dst->user_comments[ci]=op_strdup_with_len(_src->user_comments[ci],len);\n    if(OP_UNLIKELY(_dst->user_comments[ci]==NULL))return OP_EFAULT;\n    _dst->comment_lengths[ci]=len;\n    _dst->comments=ci+1;\n  }\n  if(_src->comment_lengths!=NULL){\n    int len;\n    len=_src->comment_lengths[ncomments];\n    if(len>0){\n      _dst->user_comments[ncomments]=(char *)_ogg_malloc(len);\n      if(OP_UNLIKELY(_dst->user_comments[ncomments]==NULL))return OP_EFAULT;\n      memcpy(_dst->user_comments[ncomments],_src->user_comments[ncomments],len);\n      _dst->comment_lengths[ncomments]=len;\n    }\n  }\n 
 return 0;\n}\n\nint opus_tags_copy(OpusTags *_dst,const OpusTags *_src){\n  OpusTags dst;\n  int      ret;\n  opus_tags_init(&dst);\n  ret=opus_tags_copy_impl(&dst,_src);\n  if(OP_UNLIKELY(ret<0))opus_tags_clear(&dst);\n  else *_dst=*&dst;\n  return ret;\n}\n\nint opus_tags_add(OpusTags *_tags,const char *_tag,const char *_value){\n  char   *comment;\n  size_t  tag_len;\n  size_t  value_len;\n  int     ncomments;\n  int     ret;\n  ncomments=_tags->comments;\n  ret=op_tags_ensure_capacity(_tags,ncomments+1);\n  if(OP_UNLIKELY(ret<0))return ret;\n  tag_len=strlen(_tag);\n  value_len=strlen(_value);\n  /*+2 for '=' and '\\0'.*/\n  if(tag_len+value_len<tag_len)return OP_EFAULT;\n  if(tag_len+value_len>(size_t)INT_MAX-2)return OP_EFAULT;\n  comment=(char *)_ogg_malloc(sizeof(*comment)*(tag_len+value_len+2));\n  if(OP_UNLIKELY(comment==NULL))return OP_EFAULT;\n  memcpy(comment,_tag,sizeof(*comment)*tag_len);\n  comment[tag_len]='=';\n  memcpy(comment+tag_len+1,_value,sizeof(*comment)*(value_len+1));\n  _tags->user_comments[ncomments]=comment;\n  _tags->comment_lengths[ncomments]=(int)(tag_len+value_len+1);\n  _tags->comments=ncomments+1;\n  return 0;\n}\n\nint opus_tags_add_comment(OpusTags *_tags,const char *_comment){\n  char *comment;\n  int   comment_len;\n  int   ncomments;\n  int   ret;\n  ncomments=_tags->comments;\n  ret=op_tags_ensure_capacity(_tags,ncomments+1);\n  if(OP_UNLIKELY(ret<0))return ret;\n  comment_len=(int)strlen(_comment);\n  comment=op_strdup_with_len(_comment,comment_len);\n  if(OP_UNLIKELY(comment==NULL))return OP_EFAULT;\n  _tags->user_comments[ncomments]=comment;\n  _tags->comment_lengths[ncomments]=comment_len;\n  _tags->comments=ncomments+1;\n  return 0;\n}\n\nint opus_tags_set_binary_suffix(OpusTags *_tags,\n const unsigned char *_data,int _len){\n  unsigned char *binary_suffix_data;\n  int            ncomments;\n  int            ret;\n  if(_len<0||_len>0&&(_data==NULL||!(_data[0]&1)))return OP_EINVAL;\n  ncomments=_tags->comments;\n  
ret=op_tags_ensure_capacity(_tags,ncomments);\n  if(OP_UNLIKELY(ret<0))return ret;\n  binary_suffix_data=\n   (unsigned char *)_ogg_realloc(_tags->user_comments[ncomments],_len);\n  if(OP_UNLIKELY(binary_suffix_data==NULL))return OP_EFAULT;\n  memcpy(binary_suffix_data,_data,_len);\n  _tags->user_comments[ncomments]=(char *)binary_suffix_data;\n  _tags->comment_lengths[ncomments]=_len;\n  return 0;\n}\n\nint opus_tagcompare(const char *_tag_name,const char *_comment){\n  size_t tag_len;\n  tag_len=strlen(_tag_name);\n  if(OP_UNLIKELY(tag_len>(size_t)INT_MAX))return -1;\n  return opus_tagncompare(_tag_name,(int)tag_len,_comment);\n}\n\nint opus_tagncompare(const char *_tag_name,int _tag_len,const char *_comment){\n  int ret;\n  OP_ASSERT(_tag_len>=0);\n  ret=op_strncasecmp(_tag_name,_comment,_tag_len);\n  return ret?ret:'='-_comment[_tag_len];\n}\n\nconst char *opus_tags_query(const OpusTags *_tags,const char *_tag,int _count){\n  char   **user_comments;\n  size_t   tag_len;\n  int      found;\n  int      ncomments;\n  int      ci;\n  tag_len=strlen(_tag);\n  if(OP_UNLIKELY(tag_len>(size_t)INT_MAX))return NULL;\n  ncomments=_tags->comments;\n  user_comments=_tags->user_comments;\n  found=0;\n  for(ci=0;ci<ncomments;ci++){\n    if(!opus_tagncompare(_tag,(int)tag_len,user_comments[ci])){\n      /*We return a pointer to the data, not a copy.*/\n      if(_count==found++)return user_comments[ci]+tag_len+1;\n    }\n  }\n  /*Didn't find anything.*/\n  return NULL;\n}\n\nint opus_tags_query_count(const OpusTags *_tags,const char *_tag){\n  char   **user_comments;\n  size_t   tag_len;\n  int      found;\n  int      ncomments;\n  int      ci;\n  tag_len=strlen(_tag);\n  if(OP_UNLIKELY(tag_len>(size_t)INT_MAX))return 0;\n  ncomments=_tags->comments;\n  user_comments=_tags->user_comments;\n  found=0;\n  for(ci=0;ci<ncomments;ci++){\n    if(!opus_tagncompare(_tag,(int)tag_len,user_comments[ci]))found++;\n  }\n  return found;\n}\n\nconst unsigned char 
*opus_tags_get_binary_suffix(const OpusTags *_tags,\n int *_len){\n  int ncomments;\n  int len;\n  ncomments=_tags->comments;\n  len=_tags->comment_lengths==NULL?0:_tags->comment_lengths[ncomments];\n  *_len=len;\n  OP_ASSERT(len==0||_tags->user_comments!=NULL);\n  return len>0?(const unsigned char *)_tags->user_comments[ncomments]:NULL;\n}\n\nstatic int opus_tags_get_gain(const OpusTags *_tags,int *_gain_q8,\n const char *_tag_name,size_t _tag_len){\n  char **comments;\n  int    ncomments;\n  int    ci;\n  comments=_tags->user_comments;\n  ncomments=_tags->comments;\n  /*Look for the first valid tag with the name _tag_name and use that.*/\n  for(ci=0;ci<ncomments;ci++){\n    OP_ASSERT(_tag_len<=(size_t)INT_MAX);\n    if(opus_tagncompare(_tag_name,(int)_tag_len,comments[ci])==0){\n      char       *p;\n      opus_int32  gain_q8;\n      int         negative;\n      p=comments[ci]+_tag_len+1;\n      negative=0;\n      if(*p=='-'){\n        negative=-1;\n        p++;\n      }\n      else if(*p=='+')p++;\n      gain_q8=0;\n      while(*p>='0'&&*p<='9'){\n        gain_q8=10*gain_q8+*p-'0';\n        if(gain_q8>32767-negative)break;\n        p++;\n      }\n      /*This didn't look like a signed 16-bit decimal integer.\n        Not a valid gain tag.*/\n      if(*p!='\\0')continue;\n      *_gain_q8=(int)(gain_q8+negative^negative);\n      return 0;\n    }\n  }\n  return OP_FALSE;\n}\n\nint opus_tags_get_album_gain(const OpusTags *_tags,int *_gain_q8){\n  return opus_tags_get_gain(_tags,_gain_q8,\"R128_ALBUM_GAIN\",15);\n}\n\nint opus_tags_get_track_gain(const OpusTags *_tags,int *_gain_q8){\n  return opus_tags_get_gain(_tags,_gain_q8,\"R128_TRACK_GAIN\",15);\n}\n\nstatic int op_is_jpeg(const unsigned char *_buf,size_t _buf_sz){\n  return _buf_sz>=11&&memcmp(_buf,\"\\xFF\\xD8\\xFF\\xE0\",4)==0\n   &&(_buf[4]<<8|_buf[5])>=16&&memcmp(_buf+6,\"JFIF\",5)==0;\n}\n\n/*Tries to extract the width, height, bits per pixel, and palette size of a\n   JPEG.\n  On failure, simply leaves 
its outputs unmodified.*/\nstatic void op_extract_jpeg_params(const unsigned char *_buf,size_t _buf_sz,\n opus_uint32 *_width,opus_uint32 *_height,\n opus_uint32 *_depth,opus_uint32 *_colors,int *_has_palette){\n  if(op_is_jpeg(_buf,_buf_sz)){\n    size_t offs;\n    offs=2;\n    for(;;){\n      size_t segment_len;\n      int    marker;\n      while(offs<_buf_sz&&_buf[offs]!=0xFF)offs++;\n      while(offs<_buf_sz&&_buf[offs]==0xFF)offs++;\n      marker=_buf[offs];\n      offs++;\n      /*If we hit EOI* (end of image), or another SOI* (start of image),\n         or SOS (start of scan), then stop now.*/\n      if(offs>=_buf_sz||(marker>=0xD8&&marker<=0xDA))break;\n      /*RST* (restart markers): skip (no segment length).*/\n      else if(marker>=0xD0&&marker<=0xD7)continue;\n      /*Read the length of the marker segment.*/\n      if(_buf_sz-offs<2)break;\n      segment_len=_buf[offs]<<8|_buf[offs+1];\n      if(segment_len<2||_buf_sz-offs<segment_len)break;\n      if(marker==0xC0||(marker>0xC0&&marker<0xD0&&(marker&3)!=0)){\n        /*Found a SOFn (start of frame) marker segment:*/\n        if(segment_len>=8){\n          *_height=_buf[offs+3]<<8|_buf[offs+4];\n          *_width=_buf[offs+5]<<8|_buf[offs+6];\n          *_depth=_buf[offs+2]*_buf[offs+7];\n          *_colors=0;\n          *_has_palette=0;\n        }\n        break;\n      }\n      /*Other markers: skip the whole marker segment.*/\n      offs+=segment_len;\n    }\n  }\n}\n\nstatic int op_is_png(const unsigned char *_buf,size_t _buf_sz){\n  return _buf_sz>=8&&memcmp(_buf,\"\\x89PNG\\x0D\\x0A\\x1A\\x0A\",8)==0;\n}\n\n/*Tries to extract the width, height, bits per pixel, and palette size of a\n   PNG.\n  On failure, simply leaves its outputs unmodified.*/\nstatic void op_extract_png_params(const unsigned char *_buf,size_t _buf_sz,\n opus_uint32 *_width,opus_uint32 *_height,\n opus_uint32 *_depth,opus_uint32 *_colors,int *_has_palette){\n  if(op_is_png(_buf,_buf_sz)){\n    size_t offs;\n    offs=8;\n    
while(_buf_sz-offs>=12){\n      ogg_uint32_t chunk_len;\n      chunk_len=op_parse_uint32be(_buf+offs);\n      if(chunk_len>_buf_sz-(offs+12))break;\n      else if(chunk_len==13&&memcmp(_buf+offs+4,\"IHDR\",4)==0){\n        int color_type;\n        *_width=op_parse_uint32be(_buf+offs+8);\n        *_height=op_parse_uint32be(_buf+offs+12);\n        color_type=_buf[offs+17];\n        if(color_type==3){\n          *_depth=24;\n          *_has_palette=1;\n        }\n        else{\n          int sample_depth;\n          sample_depth=_buf[offs+16];\n          if(color_type==0)*_depth=sample_depth;\n          else if(color_type==2)*_depth=sample_depth*3;\n          else if(color_type==4)*_depth=sample_depth*2;\n          else if(color_type==6)*_depth=sample_depth*4;\n          *_colors=0;\n          *_has_palette=0;\n          break;\n        }\n      }\n      else if(*_has_palette>0&&memcmp(_buf+offs+4,\"PLTE\",4)==0){\n        *_colors=chunk_len/3;\n        break;\n      }\n      offs+=12+chunk_len;\n    }\n  }\n}\n\nstatic int op_is_gif(const unsigned char *_buf,size_t _buf_sz){\n  return _buf_sz>=6&&(memcmp(_buf,\"GIF87a\",6)==0||memcmp(_buf,\"GIF89a\",6)==0);\n}\n\n/*Tries to extract the width, height, bits per pixel, and palette size of a\n   GIF.\n  On failure, simply leaves its outputs unmodified.*/\nstatic void op_extract_gif_params(const unsigned char *_buf,size_t _buf_sz,\n opus_uint32 *_width,opus_uint32 *_height,\n opus_uint32 *_depth,opus_uint32 *_colors,int *_has_palette){\n  if(op_is_gif(_buf,_buf_sz)&&_buf_sz>=14){\n    *_width=_buf[6]|_buf[7]<<8;\n    *_height=_buf[8]|_buf[9]<<8;\n    /*libFLAC hard-codes the depth to 24.*/\n    *_depth=24;\n    *_colors=1<<((_buf[10]&7)+1);\n    *_has_palette=1;\n  }\n}\n\n/*The actual implementation of opus_picture_tag_parse().\n  Unlike the public API, this function requires _pic to already be\n   initialized, modifies its contents before success is guaranteed, and assumes\n   the caller will clear it on 
error.*/\nstatic int opus_picture_tag_parse_impl(OpusPictureTag *_pic,const char *_tag,\n unsigned char *_buf,size_t _buf_sz,size_t _base64_sz){\n  opus_int32   picture_type;\n  opus_uint32  mime_type_length;\n  char        *mime_type;\n  opus_uint32  description_length;\n  char        *description;\n  opus_uint32  width;\n  opus_uint32  height;\n  opus_uint32  depth;\n  opus_uint32  colors;\n  opus_uint32  data_length;\n  opus_uint32  file_width;\n  opus_uint32  file_height;\n  opus_uint32  file_depth;\n  opus_uint32  file_colors;\n  int          format;\n  int          has_palette;\n  int          colors_set;\n  size_t       i;\n  /*Decode the BASE64 data.*/\n  for(i=0;i<_base64_sz;i++){\n    opus_uint32 value;\n    int         j;\n    value=0;\n    for(j=0;j<4;j++){\n      unsigned c;\n      unsigned d;\n      c=(unsigned char)_tag[4*i+j];\n      if(c=='+')d=62;\n      else if(c=='/')d=63;\n      else if(c>='0'&&c<='9')d=52+c-'0';\n      else if(c>='a'&&c<='z')d=26+c-'a';\n      else if(c>='A'&&c<='Z')d=c-'A';\n      else if(c=='='&&3*i+j>_buf_sz)d=0;\n      else return OP_ENOTFORMAT;\n      value=value<<6|d;\n    }\n    _buf[3*i]=(unsigned char)(value>>16);\n    if(3*i+1<_buf_sz){\n      _buf[3*i+1]=(unsigned char)(value>>8);\n      if(3*i+2<_buf_sz)_buf[3*i+2]=(unsigned char)value;\n    }\n  }\n  i=0;\n  picture_type=op_parse_uint32be(_buf+i);\n  i+=4;\n  /*Extract the MIME type.*/\n  mime_type_length=op_parse_uint32be(_buf+i);\n  i+=4;\n  if(mime_type_length>_buf_sz-32)return OP_ENOTFORMAT;\n  mime_type=(char *)_ogg_malloc(sizeof(*_pic->mime_type)*(mime_type_length+1));\n  if(mime_type==NULL)return OP_EFAULT;\n  memcpy(mime_type,_buf+i,sizeof(*mime_type)*mime_type_length);\n  mime_type[mime_type_length]='\\0';\n  _pic->mime_type=mime_type;\n  i+=mime_type_length;\n  /*Extract the description string.*/\n  description_length=op_parse_uint32be(_buf+i);\n  i+=4;\n  if(description_length>_buf_sz-mime_type_length-32)return OP_ENOTFORMAT;\n  description=\n   (char 
*)_ogg_malloc(sizeof(*_pic->mime_type)*(description_length+1));\n  if(description==NULL)return OP_EFAULT;\n  memcpy(description,_buf+i,sizeof(*description)*description_length);\n  description[description_length]='\\0';\n  _pic->description=description;\n  i+=description_length;\n  /*Extract the remaining fields.*/\n  width=op_parse_uint32be(_buf+i);\n  i+=4;\n  height=op_parse_uint32be(_buf+i);\n  i+=4;\n  depth=op_parse_uint32be(_buf+i);\n  i+=4;\n  colors=op_parse_uint32be(_buf+i);\n  i+=4;\n  /*If one of these is set, they all must be, but colors==0 is a valid value.*/\n  colors_set=width!=0||height!=0||depth!=0||colors!=0;\n  if((width==0||height==0||depth==0)&&colors_set)return OP_ENOTFORMAT;\n  data_length=op_parse_uint32be(_buf+i);\n  i+=4;\n  if(data_length>_buf_sz-i)return OP_ENOTFORMAT;\n  /*Trim extraneous data so we don't copy it below.*/\n  _buf_sz=i+data_length;\n  /*Attempt to determine the image format.*/\n  format=OP_PIC_FORMAT_UNKNOWN;\n  if(mime_type_length==3&&strcmp(mime_type,\"-->\")==0){\n    format=OP_PIC_FORMAT_URL;\n    /*Picture type 1 must be a 32x32 PNG.*/\n    if(picture_type==1&&(width!=0||height!=0)&&(width!=32||height!=32)){\n      return OP_ENOTFORMAT;\n    }\n    /*Append a terminating NUL for the convenience of our callers.*/\n    _buf[_buf_sz++]='\\0';\n  }\n  else{\n    if(mime_type_length==10\n     &&op_strncasecmp(mime_type,\"image/jpeg\",mime_type_length)==0){\n      if(op_is_jpeg(_buf+i,data_length))format=OP_PIC_FORMAT_JPEG;\n    }\n    else if(mime_type_length==9\n     &&op_strncasecmp(mime_type,\"image/png\",mime_type_length)==0){\n      if(op_is_png(_buf+i,data_length))format=OP_PIC_FORMAT_PNG;\n    }\n    else if(mime_type_length==9\n     &&op_strncasecmp(mime_type,\"image/gif\",mime_type_length)==0){\n      if(op_is_gif(_buf+i,data_length))format=OP_PIC_FORMAT_GIF;\n    }\n    else if(mime_type_length==0||(mime_type_length==6\n     &&op_strncasecmp(mime_type,\"image/\",mime_type_length)==0)){\n      
if(op_is_jpeg(_buf+i,data_length))format=OP_PIC_FORMAT_JPEG;\n      else if(op_is_png(_buf+i,data_length))format=OP_PIC_FORMAT_PNG;\n      else if(op_is_gif(_buf+i,data_length))format=OP_PIC_FORMAT_GIF;\n    }\n    file_width=file_height=file_depth=file_colors=0;\n    has_palette=-1;\n    switch(format){\n      case OP_PIC_FORMAT_JPEG:{\n        op_extract_jpeg_params(_buf+i,data_length,\n         &file_width,&file_height,&file_depth,&file_colors,&has_palette);\n      }break;\n      case OP_PIC_FORMAT_PNG:{\n        op_extract_png_params(_buf+i,data_length,\n         &file_width,&file_height,&file_depth,&file_colors,&has_palette);\n      }break;\n      case OP_PIC_FORMAT_GIF:{\n        op_extract_gif_params(_buf+i,data_length,\n         &file_width,&file_height,&file_depth,&file_colors,&has_palette);\n      }break;\n    }\n    if(has_palette>=0){\n      /*If we successfully extracted these parameters from the image, override\n         any declared values.*/\n      width=file_width;\n      height=file_height;\n      depth=file_depth;\n      colors=file_colors;\n    }\n    /*Picture type 1 must be a 32x32 PNG.*/\n    if(picture_type==1&&(format!=OP_PIC_FORMAT_PNG||width!=32||height!=32)){\n      return OP_ENOTFORMAT;\n    }\n  }\n  /*Adjust _buf_sz instead of using data_length to capture the terminating NUL\n     for URLs.*/\n  _buf_sz-=i;\n  memmove(_buf,_buf+i,sizeof(*_buf)*_buf_sz);\n  _buf=(unsigned char *)_ogg_realloc(_buf,_buf_sz);\n  if(_buf_sz>0&&_buf==NULL)return OP_EFAULT;\n  _pic->type=picture_type;\n  _pic->width=width;\n  _pic->height=height;\n  _pic->depth=depth;\n  _pic->colors=colors;\n  _pic->data_length=data_length;\n  _pic->data=_buf;\n  _pic->format=format;\n  return 0;\n}\n\nint opus_picture_tag_parse(OpusPictureTag *_pic,const char *_tag){\n  OpusPictureTag  pic;\n  unsigned char  *buf;\n  size_t          base64_sz;\n  size_t          buf_sz;\n  size_t          tag_length;\n  int             ret;\n  
if(opus_tagncompare(\"METADATA_BLOCK_PICTURE\",22,_tag)==0)_tag+=23;\n  /*Figure out how much BASE64-encoded data we have.*/\n  tag_length=strlen(_tag);\n  if(tag_length&3)return OP_ENOTFORMAT;\n  base64_sz=tag_length>>2;\n  buf_sz=3*base64_sz;\n  if(buf_sz<32)return OP_ENOTFORMAT;\n  if(_tag[tag_length-1]=='=')buf_sz--;\n  if(_tag[tag_length-2]=='=')buf_sz--;\n  if(buf_sz<32)return OP_ENOTFORMAT;\n  /*Allocate an extra byte to allow appending a terminating NUL to URL data.*/\n  buf=(unsigned char *)_ogg_malloc(sizeof(*buf)*(buf_sz+1));\n  if(buf==NULL)return OP_EFAULT;\n  opus_picture_tag_init(&pic);\n  ret=opus_picture_tag_parse_impl(&pic,_tag,buf,buf_sz,base64_sz);\n  if(ret<0){\n    opus_picture_tag_clear(&pic);\n    _ogg_free(buf);\n  }\n  else *_pic=*&pic;\n  return ret;\n}\n\nvoid opus_picture_tag_init(OpusPictureTag *_pic){\n  memset(_pic,0,sizeof(*_pic));\n}\n\nvoid opus_picture_tag_clear(OpusPictureTag *_pic){\n  _ogg_free(_pic->description);\n  _ogg_free(_pic->mime_type);\n  _ogg_free(_pic->data);\n}\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/internal.c",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 2012                *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************/\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"internal.h\"\n\n#if defined(OP_ENABLE_ASSERTIONS)\nvoid op_fatal_impl(const char *_str,const char *_file,int _line){\n  fprintf(stderr,\"Fatal (internal) error in %s, line %i: %s\\n\",\n   _file,_line,_str);\n  abort();\n}\n#endif\n\n/*A version of strncasecmp() that is guaranteed to only ignore the case of\n   ASCII characters.*/\nint op_strncasecmp(const char *_a,const char *_b,int _n){\n  int i;\n  for(i=0;i<_n;i++){\n    int a;\n    int b;\n    int d;\n    a=_a[i];\n    b=_b[i];\n    if(a>='a'&&a<='z')a-='a'-'A';\n    if(b>='a'&&b<='z')b-='a'-'A';\n    d=a-b;\n    if(d)return d;\n  }\n  return 0;\n}\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/internal.h",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 2012                *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************/\n#if !defined(_opusfile_internal_h)\n# define _opusfile_internal_h (1)\n\n# if !defined(_REENTRANT)\n#  define _REENTRANT\n# endif\n# if !defined(_GNU_SOURCE)\n#  define _GNU_SOURCE\n# endif\n# if !defined(_LARGEFILE_SOURCE)\n#  define _LARGEFILE_SOURCE\n# endif\n# if !defined(_LARGEFILE64_SOURCE)\n#  define _LARGEFILE64_SOURCE\n# endif\n# if !defined(_FILE_OFFSET_BITS)\n#  define _FILE_OFFSET_BITS 64\n# endif\n\n# include <stdlib.h>\n# include <opusfile.h>\n\ntypedef struct OggOpusLink OggOpusLink;\n\n# if defined(OP_FIXED_POINT)\n\ntypedef opus_int16 op_sample;\n\n# else\n\ntypedef float      op_sample;\n\n/*We're using this define to test for libopus 1.1 or later until libopus\n   provides a better mechanism.*/\n#  if defined(OPUS_GET_EXPERT_FRAME_DURATION_REQUEST)\n/*Enable soft clipping prevention in 16-bit decodes.*/\n#   define OP_SOFT_CLIP (1)\n#  endif\n\n# endif\n\n# if OP_GNUC_PREREQ(4,2)\n/*Disable excessive warnings about the order of operations.*/\n#  pragma GCC diagnostic ignored \"-Wparentheses\"\n# elif defined(_MSC_VER)\n/*Disable excessive warnings about the order of operations.*/\n#  pragma warning(disable:4554)\n/*Disable warnings about \"deprecated\" POSIX 
functions.*/\n#  pragma warning(disable:4996)\n# endif\n\n# if OP_GNUC_PREREQ(3,0)\n/*Another alternative is\n    (__builtin_constant_p(_x)?!!(_x):__builtin_expect(!!(_x),1))\n   but that evaluates _x multiple times, which may be bad.*/\n#  define OP_LIKELY(_x) (__builtin_expect(!!(_x),1))\n#  define OP_UNLIKELY(_x) (__builtin_expect(!!(_x),0))\n# else\n#  define OP_LIKELY(_x)   (!!(_x))\n#  define OP_UNLIKELY(_x) (!!(_x))\n# endif\n\n# if defined(OP_ENABLE_ASSERTIONS)\n#  if OP_GNUC_PREREQ(2,5)||__SUNPRO_C>=0x590\n__attribute__((noreturn))\n#  endif\nvoid op_fatal_impl(const char *_str,const char *_file,int _line);\n\n#  define OP_FATAL(_str) (op_fatal_impl(_str,__FILE__,__LINE__))\n\n#  define OP_ASSERT(_cond) \\\n  do{ \\\n    if(OP_UNLIKELY(!(_cond)))OP_FATAL(\"assertion failed: \" #_cond); \\\n  } \\\n  while(0)\n#  define OP_ALWAYS_TRUE(_cond) OP_ASSERT(_cond)\n\n# else\n#  define OP_FATAL(_str) abort()\n#  define OP_ASSERT(_cond)\n#  define OP_ALWAYS_TRUE(_cond) ((void)(_cond))\n# endif\n\n# define OP_INT64_MAX (2*(((ogg_int64_t)1<<62)-1)|1)\n# define OP_INT64_MIN (-OP_INT64_MAX-1)\n# define OP_INT32_MAX (2*(((ogg_int32_t)1<<30)-1)|1)\n# define OP_INT32_MIN (-OP_INT32_MAX-1)\n\n# define OP_MIN(_a,_b)        ((_a)<(_b)?(_a):(_b))\n# define OP_MAX(_a,_b)        ((_a)>(_b)?(_a):(_b))\n# define OP_CLAMP(_lo,_x,_hi) (OP_MAX(_lo,OP_MIN(_x,_hi)))\n\n/*Advance a file offset by the given amount, clamping against OP_INT64_MAX.\n  This is used to advance a known offset by things like OP_CHUNK_SIZE or\n   OP_PAGE_SIZE_MAX, while making sure to avoid signed overflow.\n  It assumes that both _offset and _amount are non-negative.*/\n#define OP_ADV_OFFSET(_offset,_amount) \\\n (OP_MIN(_offset,OP_INT64_MAX-(_amount))+(_amount))\n\n/*The maximum channel count for any mapping we'll actually decode.*/\n# define OP_NCHANNELS_MAX (8)\n\n/*Initial state.*/\n# define  OP_NOTOPEN   (0)\n/*We've found the first Opus stream in the first link.*/\n# define  OP_PARTOPEN  (1)\n# define  
OP_OPENED    (2)\n/*We've found the first Opus stream in the current link.*/\n# define  OP_STREAMSET (3)\n/*We've initialized the decoder for the chosen Opus stream in the current\n   link.*/\n# define  OP_INITSET   (4)\n\n/*Information cached for a single link in a chained Ogg Opus file.\n  We choose the first Opus stream encountered in each link to play back (and\n   require at least one).*/\nstruct OggOpusLink{\n  /*The byte offset of the first header page in this link.*/\n  opus_int64   offset;\n  /*The byte offset of the first data page from the chosen Opus stream in this\n     link (after the headers).*/\n  opus_int64   data_offset;\n  /*The byte offset of the last page from the chosen Opus stream in this link.\n    This is used when seeking to ensure we find a page before the last one, so\n     that end-trimming calculations work properly.\n    This is only valid for seekable sources.*/\n  opus_int64   end_offset;\n  /*The total duration of all prior links.\n    This is always zero for non-seekable sources.*/\n  ogg_int64_t  pcm_file_offset;\n  /*The granule position of the last sample.\n    This is only valid for seekable sources.*/\n  ogg_int64_t  pcm_end;\n  /*The granule position before the first sample.*/\n  ogg_int64_t  pcm_start;\n  /*The serial number.*/\n  ogg_uint32_t serialno;\n  /*The contents of the info header.*/\n  OpusHead     head;\n  /*The contents of the comment header.*/\n  OpusTags     tags;\n};\n\nstruct OggOpusFile{\n  /*The callbacks used to access the stream.*/\n  OpusFileCallbacks  callbacks;\n  /*A FILE *, memory buffer, etc.*/\n  void              *stream;\n  /*Whether or not we can seek with this stream.*/\n  int                seekable;\n  /*The number of links in this chained Ogg Opus file.*/\n  int                nlinks;\n  /*The cached information from each link in a chained Ogg Opus file.\n    If stream isn't seekable (e.g., it's a pipe), only the current link\n     appears.*/\n  OggOpusLink       *links;\n  /*The number of 
serial numbers from a single link.*/\n  int                nserialnos;\n  /*The capacity of the list of serial numbers from a single link.*/\n  int                cserialnos;\n  /*Storage for the list of serial numbers from a single link.\n    This is a scratch buffer used when scanning the BOS pages at the start of\n     each link.*/\n  ogg_uint32_t      *serialnos;\n  /*This is the current offset of the data processed by the ogg_sync_state.\n    After a seek, this should be set to the target offset so that we can track\n     the byte offsets of subsequent pages.\n    After a call to op_get_next_page(), this will point to the first byte after\n     that page.*/\n  opus_int64         offset;\n  /*The total size of this stream, or -1 if it's unseekable.*/\n  opus_int64         end;\n  /*Used to locate pages in the stream.*/\n  ogg_sync_state     oy;\n  /*One of OP_NOTOPEN, OP_PARTOPEN, OP_OPENED, OP_STREAMSET, OP_INITSET.*/\n  int                ready_state;\n  /*The current link being played back.*/\n  int                cur_link;\n  /*The number of decoded samples to discard from the start of decoding.*/\n  opus_int32         cur_discard_count;\n  /*The granule position of the previous packet (current packet start time).*/\n  ogg_int64_t        prev_packet_gp;\n  /*The stream offset of the most recent page with completed packets, or -1.\n    This is only needed to recover continued packet data in the seeking logic,\n     when we use the current position as one of our bounds, only to later\n     discover it was the correct starting point.*/\n  opus_int64         prev_page_offset;\n  /*The number of bytes read since the last bitrate query, including framing.*/\n  opus_int64         bytes_tracked;\n  /*The number of samples decoded since the last bitrate query.*/\n  ogg_int64_t        samples_tracked;\n  /*Takes physical pages and welds them into a logical stream of packets.*/\n  ogg_stream_state   os;\n  /*Re-timestamped packets from a single page.\n    Buffering 
these relies on the undocumented libogg behavior that ogg_packet\n     pointers remain valid until the next page is submitted to the\n     ogg_stream_state they came from.*/\n  ogg_packet         op[255];\n  /*The index of the next packet to return.*/\n  int                op_pos;\n  /*The total number of packets available.*/\n  int                op_count;\n  /*Central working state for the packet-to-PCM decoder.*/\n  OpusMSDecoder     *od;\n  /*The application-provided packet decode callback.*/\n  op_decode_cb_func  decode_cb;\n  /*The application-provided packet decode callback context.*/\n  void              *decode_cb_ctx;\n  /*The stream count used to initialize the decoder.*/\n  int                od_stream_count;\n  /*The coupled stream count used to initialize the decoder.*/\n  int                od_coupled_count;\n  /*The channel count used to initialize the decoder.*/\n  int                od_channel_count;\n  /*The channel mapping used to initialize the decoder.*/\n  unsigned char      od_mapping[OP_NCHANNELS_MAX];\n  /*The buffered data for one decoded packet.*/\n  op_sample         *od_buffer;\n  /*The current position in the decoded buffer.*/\n  int                od_buffer_pos;\n  /*The number of valid samples in the decoded buffer.*/\n  int                od_buffer_size;\n  /*The type of gain offset to apply.\n    One of OP_HEADER_GAIN, OP_ALBUM_GAIN, OP_TRACK_GAIN, or OP_ABSOLUTE_GAIN.*/\n  int                gain_type;\n  /*The offset to apply to the gain.*/\n  opus_int32         gain_offset_q8;\n  /*Internal state for soft clipping and dithering float->short output.*/\n#if !defined(OP_FIXED_POINT)\n# if defined(OP_SOFT_CLIP)\n  float              clip_state[OP_NCHANNELS_MAX];\n# endif\n  float              dither_a[OP_NCHANNELS_MAX*4];\n  float              dither_b[OP_NCHANNELS_MAX*4];\n  opus_uint32        dither_seed;\n  int                dither_mute;\n  int                dither_disabled;\n  /*The number of channels represented by the 
internal state.\n    This gets set to 0 whenever anything that would prevent state propagation\n     occurs (switching between the float/short APIs, or between the\n     stereo/multistream APIs).*/\n  int                state_channel_count;\n#endif\n};\n\nint op_strncasecmp(const char *_a,const char *_b,int _n);\n\n#endif\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/opusfile.c",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 1994-2012           *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************\n\n function: stdio-based convenience library for opening/seeking/decoding\n last mod: $Id: vorbisfile.c 17573 2010-10-27 14:53:59Z xiphmont $\n\n ********************************************************************/\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"internal.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <limits.h>\n#include <string.h>\n#include <math.h>\n\n#include \"opusfile.h\"\n\n/*This implementation is largely based off of libvorbisfile.\n  All of the Ogg bits work roughly the same, though I have made some\n   \"improvements\" that have not been folded back there, yet.*/\n\n/*A 'chained bitstream' is an Ogg Opus bitstream that contains more than one\n   logical bitstream arranged end to end (the only form of Ogg multiplexing\n   supported by this library.\n  Grouping (parallel multiplexing) is not supported, except to the extent that\n   if there are multiple logical Ogg streams in a single link of the chain, we\n   will ignore all but the first Opus stream we find.*/\n\n/*An Ogg Opus file can be played beginning to end (streamed) without worrying\n   ahead of time about chaining (see opusdec from the opus-tools 
package).\n  If we have the whole file, however, and want random access\n   (seeking/scrubbing) or desire to know the total length/time of a file, we\n   need to account for the possibility of chaining.*/\n\n/*We can handle things a number of ways.\n  We can determine the entire bitstream structure right off the bat, or find\n   pieces on demand.\n  This library determines and caches structure for the entire bitstream, but\n   builds a virtual decoder on the fly when moving between links in the chain.*/\n\n/*There are also different ways to implement seeking.\n  Enough information exists in an Ogg bitstream to seek to sample-granularity\n   positions in the output.\n  Or, one can seek by picking some portion of the stream roughly in the desired\n   area if we only want coarse navigation through the stream.\n  We implement and expose both strategies.*/\n\n/*The maximum number of bytes in a page (including the page headers).*/\n#define OP_PAGE_SIZE_MAX  (65307)\n/*The default amount to seek backwards per step when trying to find the\n   previous page.\n  This must be at least as large as the maximum size of a page.*/\n#define OP_CHUNK_SIZE     (65536)\n/*The maximum amount to seek backwards per step when trying to find the\n   previous page.*/\n#define OP_CHUNK_SIZE_MAX (1024*(opus_int32)1024)\n/*A smaller read size is needed for low-rate streaming.*/\n#define OP_READ_SIZE      (2048)\n\nint op_test(OpusHead *_head,\n const unsigned char *_initial_data,size_t _initial_bytes){\n  ogg_sync_state  oy;\n  char           *data;\n  int             err;\n  /*The first page of a normal Opus file will be at most 57 bytes (27 Ogg\n     page header bytes + 1 lacing value + 21 Opus header bytes + 8 channel\n     mapping bytes).\n    It will be at least 47 bytes (27 Ogg page header bytes + 1 lacing value +\n     19 Opus header bytes using channel mapping family 0).\n    If we don't have at least that much data, give up now.*/\n  if(_initial_bytes<47)return OP_FALSE;\n  /*Only 
proceed if we start with the magic OggS string.\n    This is to prevent us spending a lot of time allocating memory and looking\n     for Ogg pages in non-Ogg files.*/\n  if(memcmp(_initial_data,\"OggS\",4)!=0)return OP_ENOTFORMAT;\n  if(OP_UNLIKELY(_initial_bytes>(size_t)LONG_MAX))return OP_EFAULT;\n  ogg_sync_init(&oy);\n  data=ogg_sync_buffer(&oy,(long)_initial_bytes);\n  if(data!=NULL){\n    ogg_stream_state os;\n    ogg_page         og;\n    int              ret;\n    memcpy(data,_initial_data,_initial_bytes);\n    ogg_sync_wrote(&oy,(long)_initial_bytes);\n    ogg_stream_init(&os,-1);\n    err=OP_FALSE;\n    do{\n      ogg_packet op;\n      ret=ogg_sync_pageout(&oy,&og);\n      /*Ignore holes.*/\n      if(ret<0)continue;\n      /*Stop if we run out of data.*/\n      if(!ret)break;\n      ogg_stream_reset_serialno(&os,ogg_page_serialno(&og));\n      ogg_stream_pagein(&os,&og);\n      /*Only process the first packet on this page (if it's a BOS packet,\n         it's required to be the only one).*/\n      if(ogg_stream_packetout(&os,&op)==1){\n        if(op.b_o_s){\n          ret=opus_head_parse(_head,op.packet,op.bytes);\n          /*If this didn't look like Opus, keep going.*/\n          if(ret==OP_ENOTFORMAT)continue;\n          /*Otherwise we're done, one way or another.*/\n          err=ret;\n        }\n        /*We finished parsing the headers.\n          There is no Opus to be found.*/\n        else err=OP_ENOTFORMAT;\n      }\n    }\n    while(err==OP_FALSE);\n    ogg_stream_clear(&os);\n  }\n  else err=OP_EFAULT;\n  ogg_sync_clear(&oy);\n  return err;\n}\n\n/*Many, many internal helpers.\n  The intention is not to be confusing.\n  Rampant duplication and monolithic function implementation (though we do have\n   some large, omnibus functions still) would be harder to understand anyway.\n  The high level functions are last.\n  Begin grokking near the end of the file if you prefer to read things\n   top-down.*/\n\n/*The read/seek functions track absolute 
position within the stream.*/\n\n/*Read a little more data from the file/pipe into the ogg_sync framer.\n  _nbytes: The maximum number of bytes to read.\n  Return: A positive number of bytes read on success, 0 on end-of-file, or a\n           negative value on failure.*/\nstatic int op_get_data(OggOpusFile *_of,int _nbytes){\n  unsigned char *buffer;\n  int            nbytes;\n  OP_ASSERT(_nbytes>0);\n  buffer=(unsigned char *)ogg_sync_buffer(&_of->oy,_nbytes);\n  nbytes=(int)(*_of->callbacks.read)(_of->stream,buffer,_nbytes);\n  OP_ASSERT(nbytes<=_nbytes);\n  if(OP_LIKELY(nbytes>0))ogg_sync_wrote(&_of->oy,nbytes);\n  return nbytes;\n}\n\n/*Save a tiny smidge of verbosity to make the code more readable.*/\nstatic int op_seek_helper(OggOpusFile *_of,opus_int64 _offset){\n  if(_offset==_of->offset)return 0;\n  if(_of->callbacks.seek==NULL\n   ||(*_of->callbacks.seek)(_of->stream,_offset,SEEK_SET)){\n    return OP_EREAD;\n  }\n  _of->offset=_offset;\n  ogg_sync_reset(&_of->oy);\n  return 0;\n}\n\n/*Get the current position indicator of the underlying stream.\n  This should be the same as the value reported by tell().*/\nstatic opus_int64 op_position(const OggOpusFile *_of){\n  /*The current position indicator is _not_ simply offset.\n    We may also have unprocessed, buffered data in the sync state.*/\n  return _of->offset+_of->oy.fill-_of->oy.returned;\n}\n\n/*From the head of the stream, get the next page.\n  _boundary specifies if the function is allowed to fetch more data from the\n   stream (and how much) or only use internally buffered data.\n  _boundary: -1: Unbounded search.\n              0: Read no additional data.\n                 Use only cached data.\n              n: Search for the start of a new page up to file position n.\n  Return: n>=0:       Found a page at absolute offset n.\n          OP_FALSE:   Hit the _boundary limit.\n          OP_EREAD:   An underlying read operation failed.\n          OP_BADLINK: We hit end-of-file before reaching 
_boundary.*/\nstatic opus_int64 op_get_next_page(OggOpusFile *_of,ogg_page *_og,\n opus_int64 _boundary){\n  while(_boundary<=0||_of->offset<_boundary){\n    int more;\n    more=ogg_sync_pageseek(&_of->oy,_og);\n    /*Skipped (-more) bytes.*/\n    if(OP_UNLIKELY(more<0))_of->offset-=more;\n    else if(more==0){\n      int read_nbytes;\n      int ret;\n      /*Send more paramedics.*/\n      if(!_boundary)return OP_FALSE;\n      if(_boundary<0)read_nbytes=OP_READ_SIZE;\n      else{\n        opus_int64 position;\n        position=op_position(_of);\n        if(position>=_boundary)return OP_FALSE;\n        read_nbytes=(int)OP_MIN(_boundary-position,OP_READ_SIZE);\n      }\n      ret=op_get_data(_of,read_nbytes);\n      if(OP_UNLIKELY(ret<0))return OP_EREAD;\n      if(OP_UNLIKELY(ret==0)){\n        /*Only fail cleanly on EOF if we didn't have a known boundary.\n          Otherwise, we should have been able to reach that boundary, and this\n           is a fatal error.*/\n        return OP_UNLIKELY(_boundary<0)?OP_FALSE:OP_EBADLINK;\n      }\n    }\n    else{\n      /*Got a page.\n        Return the page start offset and advance the internal offset past the\n         page end.*/\n      opus_int64 page_offset;\n      page_offset=_of->offset;\n      _of->offset+=more;\n      OP_ASSERT(page_offset>=0);\n      return page_offset;\n    }\n  }\n  return OP_FALSE;\n}\n\nstatic int op_add_serialno(const ogg_page *_og,\n ogg_uint32_t **_serialnos,int *_nserialnos,int *_cserialnos){\n  ogg_uint32_t *serialnos;\n  int           nserialnos;\n  int           cserialnos;\n  ogg_uint32_t s;\n  s=ogg_page_serialno(_og);\n  serialnos=*_serialnos;\n  nserialnos=*_nserialnos;\n  cserialnos=*_cserialnos;\n  if(OP_UNLIKELY(nserialnos>=cserialnos)){\n    if(OP_UNLIKELY(cserialnos>INT_MAX/(int)sizeof(*serialnos)-1>>1)){\n      return OP_EFAULT;\n    }\n    cserialnos=2*cserialnos+1;\n    OP_ASSERT(nserialnos<cserialnos);\n    serialnos=(ogg_uint32_t *)_ogg_realloc(serialnos,\n     
sizeof(*serialnos)*cserialnos);\n    if(OP_UNLIKELY(serialnos==NULL))return OP_EFAULT;\n  }\n  serialnos[nserialnos++]=s;\n  *_serialnos=serialnos;\n  *_nserialnos=nserialnos;\n  *_cserialnos=cserialnos;\n  return 0;\n}\n\n/*Returns nonzero if found.*/\nstatic int op_lookup_serialno(ogg_uint32_t _s,\n const ogg_uint32_t *_serialnos,int _nserialnos){\n  int i;\n  for(i=0;i<_nserialnos&&_serialnos[i]!=_s;i++);\n  return i<_nserialnos;\n}\n\nstatic int op_lookup_page_serialno(const ogg_page *_og,\n const ogg_uint32_t *_serialnos,int _nserialnos){\n  return op_lookup_serialno(ogg_page_serialno(_og),_serialnos,_nserialnos);\n}\n\ntypedef struct OpusSeekRecord OpusSeekRecord;\n\n/*We use this to remember the pages we found while enumerating the links of a\n   chained stream.\n  We keep track of the starting and ending offsets, as well as the point we\n   started searching from, so we know where to bisect.\n  We also keep the serial number, so we can tell if the page belonged to the\n   current link or not, as well as the granule position, to aid in estimating\n   the start of the link.*/\nstruct OpusSeekRecord{\n  /*The earliest byte we know of such that reading forward from it causes\n     capture to be regained at this page.*/\n  opus_int64   search_start;\n  /*The offset of this page.*/\n  opus_int64   offset;\n  /*The size of this page.*/\n  opus_int32   size;\n  /*The serial number of this page.*/\n  ogg_uint32_t serialno;\n  /*The granule position of this page.*/\n  ogg_int64_t  gp;\n};\n\n/*Find the last page beginning before _offset with a valid granule position.\n  There is no '_boundary' parameter as it will always have to read more data.\n  This is much dirtier than the above, as Ogg doesn't have any backward search\n   linkage.\n  This search prefers pages of the specified serial number.\n  If a page of the specified serial number is spotted during the\n   seek-back-and-read-forward, it will return the info of last page of the\n   matching serial number, 
instead of the very last page, unless the very last\n   page belongs to a different link than preferred serial number.\n  If no page of the specified serial number is seen, it will return the info of\n   the last page.\n  [out] _sr:   Returns information about the page that was found on success.\n  _offset:     The _offset before which to find a page.\n               Any page returned will consist of data entirely before _offset.\n  _serialno:   The preferred serial number.\n               If a page with this serial number is found, it will be returned\n                even if another page in the same link is found closer to\n                _offset.\n               This is purely opportunistic: there is no guarantee such a page\n                will be found if it exists.\n  _serialnos:  The list of serial numbers in the link that contains the\n                preferred serial number.\n  _nserialnos: The number of serial numbers in the current link.\n  Return: 0 on success, or a negative value on failure.\n          OP_EREAD:    Failed to read more data (error or EOF).\n          OP_EBADLINK: We couldn't find a page even after seeking back to the\n                        start of the stream.*/\nstatic int op_get_prev_page_serial(OggOpusFile *_of,OpusSeekRecord *_sr,\n opus_int64 _offset,ogg_uint32_t _serialno,\n const ogg_uint32_t *_serialnos,int _nserialnos){\n  OpusSeekRecord preferred_sr;\n  ogg_page       og;\n  opus_int64     begin;\n  opus_int64     end;\n  opus_int64     original_end;\n  opus_int32     chunk_size;\n  int            preferred_found;\n  original_end=end=begin=_offset;\n  preferred_found=0;\n  _offset=-1;\n  chunk_size=OP_CHUNK_SIZE;\n  do{\n    opus_int64 search_start;\n    int        ret;\n    OP_ASSERT(chunk_size>=OP_PAGE_SIZE_MAX);\n    begin=OP_MAX(begin-chunk_size,0);\n    ret=op_seek_helper(_of,begin);\n    if(OP_UNLIKELY(ret<0))return ret;\n    search_start=begin;\n    while(_of->offset<end){\n      opus_int64   llret;\n      
ogg_uint32_t serialno;\n      llret=op_get_next_page(_of,&og,end);\n      if(OP_UNLIKELY(llret<OP_FALSE))return (int)llret;\n      else if(llret==OP_FALSE)break;\n      serialno=ogg_page_serialno(&og);\n      /*Save the information for this page.\n        We're not interested in the page itself... just the serial number, byte\n         offset, page size, and granule position.*/\n      _sr->search_start=search_start;\n      _sr->offset=_offset=llret;\n      _sr->serialno=serialno;\n      OP_ASSERT(_of->offset-_offset>=0);\n      OP_ASSERT(_of->offset-_offset<=OP_PAGE_SIZE_MAX);\n      _sr->size=(opus_int32)(_of->offset-_offset);\n      _sr->gp=ogg_page_granulepos(&og);\n      /*If this page is from the stream we're looking for, remember it.*/\n      if(serialno==_serialno){\n        preferred_found=1;\n        *&preferred_sr=*_sr;\n      }\n      if(!op_lookup_serialno(serialno,_serialnos,_nserialnos)){\n        /*We fell off the end of the link, which means we seeked back too far\n           and shouldn't have been looking in that link to begin with.\n          If we found the preferred serial number, forget that we saw it.*/\n        preferred_found=0;\n      }\n      search_start=llret+1;\n    }\n    /*We started from the beginning of the stream and found nothing.\n      This should be impossible unless the contents of the stream changed out\n       from under us after we read from it.*/\n    if(OP_UNLIKELY(!begin)&&OP_UNLIKELY(_offset<0))return OP_EBADLINK;\n    /*Bump up the chunk size.\n      This is mildly helpful when seeks are very expensive (http).*/\n    chunk_size=OP_MIN(2*chunk_size,OP_CHUNK_SIZE_MAX);\n    /*Avoid quadratic complexity if we hit an invalid patch of the file.*/\n    end=OP_MIN(begin+OP_PAGE_SIZE_MAX-1,original_end);\n  }\n  while(_offset<0);\n  if(preferred_found)*_sr=*&preferred_sr;\n  return 0;\n}\n\n/*Find the last page beginning before _offset with the given serial number and\n   a valid granule position.\n  Unlike the above search, 
this continues until it finds such a page, but does\n   not stray outside the current link.\n  We could implement it (inefficiently) by calling op_get_prev_page_serial()\n   repeatedly until it returned a page that had both our preferred serial\n   number and a valid granule position, but doing it with a separate function\n   allows us to avoid repeatedly re-scanning valid pages from other streams as\n   we seek-back-and-read-forward.\n  [out] _gp:   Returns the granule position of the page that was found on\n                success.\n  _offset:     The _offset before which to find a page.\n               Any page returned will consist of data entirely before _offset.\n  _serialno:   The target serial number.\n  _serialnos:  The list of serial numbers in the link that contains the\n                preferred serial number.\n  _nserialnos: The number of serial numbers in the current link.\n  Return: The offset of the page on success, or a negative value on failure.\n          OP_EREAD:    Failed to read more data (error or EOF).\n          OP_EBADLINK: We couldn't find a page even after seeking back past the\n                        beginning of the link.*/\nstatic opus_int64 op_get_last_page(OggOpusFile *_of,ogg_int64_t *_gp,\n opus_int64 _offset,ogg_uint32_t _serialno,\n const ogg_uint32_t *_serialnos,int _nserialnos){\n  ogg_page    og;\n  ogg_int64_t gp;\n  opus_int64  begin;\n  opus_int64  end;\n  opus_int64  original_end;\n  opus_int32  chunk_size;\n  /*The target serial number must belong to the current link.*/\n  OP_ASSERT(op_lookup_serialno(_serialno,_serialnos,_nserialnos));\n  original_end=end=begin=_offset;\n  _offset=-1;\n  /*We shouldn't have to initialize gp, but gcc is too dumb to figure out that\n     ret>=0 implies we entered the if(page_gp!=-1) block at least once.*/\n  gp=-1;\n  chunk_size=OP_CHUNK_SIZE;\n  do{\n    int left_link;\n    int ret;\n    OP_ASSERT(chunk_size>=OP_PAGE_SIZE_MAX);\n    begin=OP_MAX(begin-chunk_size,0);\n    
ret=op_seek_helper(_of,begin);\n    if(OP_UNLIKELY(ret<0))return ret;\n    left_link=0;\n    while(_of->offset<end){\n      opus_int64   llret;\n      ogg_uint32_t serialno;\n      llret=op_get_next_page(_of,&og,end);\n      if(OP_UNLIKELY(llret<OP_FALSE))return llret;\n      else if(llret==OP_FALSE)break;\n      serialno=ogg_page_serialno(&og);\n      if(serialno==_serialno){\n        ogg_int64_t page_gp;\n        /*The page is from the right stream...*/\n        page_gp=ogg_page_granulepos(&og);\n        if(page_gp!=-1){\n          /*And has a valid granule position.\n            Let's remember it.*/\n          _offset=llret;\n          gp=page_gp;\n        }\n      }\n      else if(OP_UNLIKELY(!op_lookup_serialno(serialno,\n       _serialnos,_nserialnos))){\n        /*We fell off the start of the link, which means we don't need to keep\n           seeking any farther back.*/\n        left_link=1;\n      }\n    }\n    /*We started from at or before the beginning of the link and found nothing.\n      This should be impossible unless the contents of the stream changed out\n       from under us after we read from it.*/\n    if((OP_UNLIKELY(left_link)||OP_UNLIKELY(!begin))&&OP_UNLIKELY(_offset<0)){\n      return OP_EBADLINK;\n    }\n    /*Bump up the chunk size.\n      This is mildly helpful when seeks are very expensive (http).*/\n    chunk_size=OP_MIN(2*chunk_size,OP_CHUNK_SIZE_MAX);\n    /*Avoid quadratic complexity if we hit an invalid patch of the file.*/\n    end=OP_MIN(begin+OP_PAGE_SIZE_MAX-1,original_end);\n  }\n  while(_offset<0);\n  *_gp=gp;\n  return _offset;\n}\n\n/*Uses the local ogg_stream storage in _of.\n  This is important for non-streaming input sources.*/\nstatic int op_fetch_headers_impl(OggOpusFile *_of,OpusHead *_head,\n OpusTags *_tags,ogg_uint32_t **_serialnos,int *_nserialnos,\n int *_cserialnos,ogg_page *_og){\n  ogg_packet op;\n  int        ret;\n  if(_serialnos!=NULL)*_nserialnos=0;\n  /*Extract the serialnos of all BOS pages plus the 
first set of Opus headers\n     we see in the link.*/\n  while(ogg_page_bos(_og)){\n    if(_serialnos!=NULL){\n      if(OP_UNLIKELY(op_lookup_page_serialno(_og,*_serialnos,*_nserialnos))){\n        /*A dupe serialnumber in an initial header packet set==invalid stream.*/\n        return OP_EBADHEADER;\n      }\n      ret=op_add_serialno(_og,_serialnos,_nserialnos,_cserialnos);\n      if(OP_UNLIKELY(ret<0))return ret;\n    }\n    if(_of->ready_state<OP_STREAMSET){\n      /*We don't have an Opus stream in this link yet, so begin prospective\n         stream setup.\n        We need a stream to get packets.*/\n      ogg_stream_reset_serialno(&_of->os,ogg_page_serialno(_og));\n      ogg_stream_pagein(&_of->os,_og);\n      if(OP_LIKELY(ogg_stream_packetout(&_of->os,&op)>0)){\n        ret=opus_head_parse(_head,op.packet,op.bytes);\n        /*Found a valid Opus header.\n          Continue setup.*/\n        if(OP_LIKELY(ret>=0))_of->ready_state=OP_STREAMSET;\n        /*If it's just a stream type we don't recognize, ignore it.\n          Everything else is fatal.*/\n        else if(ret!=OP_ENOTFORMAT)return ret;\n      }\n      /*TODO: Should a BOS page with no packets be an error?*/\n    }\n    /*Get the next page.\n      No need to clamp the boundary offset against _of->end, as all errors\n       become OP_ENOTFORMAT or OP_EBADHEADER.*/\n    if(OP_UNLIKELY(op_get_next_page(_of,_og,\n     OP_ADV_OFFSET(_of->offset,OP_CHUNK_SIZE))<0)){\n      return _of->ready_state<OP_STREAMSET?OP_ENOTFORMAT:OP_EBADHEADER;\n    }\n  }\n  if(OP_UNLIKELY(_of->ready_state!=OP_STREAMSET))return OP_ENOTFORMAT;\n  /*If the first non-header page belonged to our Opus stream, submit it.*/\n  if(_of->os.serialno==ogg_page_serialno(_og))ogg_stream_pagein(&_of->os,_og);\n  /*Loop getting packets.*/\n  for(;;){\n    switch(ogg_stream_packetout(&_of->os,&op)){\n      case 0:{\n        /*Loop getting pages.*/\n        for(;;){\n          /*No need to clamp the boundary offset against _of->end, as all\n     
        errors become OP_EBADHEADER.*/\n          if(OP_UNLIKELY(op_get_next_page(_of,_og,\n           OP_ADV_OFFSET(_of->offset,OP_CHUNK_SIZE))<0)){\n            return OP_EBADHEADER;\n          }\n          /*If this page belongs to the correct stream, go parse it.*/\n          if(_of->os.serialno==ogg_page_serialno(_og)){\n            ogg_stream_pagein(&_of->os,_og);\n            break;\n          }\n          /*If the link ends before we see the Opus comment header, abort.*/\n          if(OP_UNLIKELY(ogg_page_bos(_og)))return OP_EBADHEADER;\n          /*Otherwise, keep looking.*/\n        }\n      }break;\n      /*We shouldn't get a hole in the headers!*/\n      case -1:return OP_EBADHEADER;\n      default:{\n        /*Got a packet.\n          It should be the comment header.*/\n        ret=opus_tags_parse(_tags,op.packet,op.bytes);\n        if(OP_UNLIKELY(ret<0))return ret;\n        /*Make sure the page terminated at the end of the comment header.\n          If there is another packet on the page, or part of a packet, then\n           reject the stream.\n          Otherwise seekable sources won't be able to seek back to the start\n           properly.*/\n        ret=ogg_stream_packetout(&_of->os,&op);\n        if(OP_UNLIKELY(ret!=0)\n         ||OP_UNLIKELY(_og->header[_og->header_len-1]==255)){\n          /*If we fail, the caller assumes our tags are uninitialized.*/\n          opus_tags_clear(_tags);\n          return OP_EBADHEADER;\n        }\n        return 0;\n      }\n    }\n  }\n}\n\nstatic int op_fetch_headers(OggOpusFile *_of,OpusHead *_head,\n OpusTags *_tags,ogg_uint32_t **_serialnos,int *_nserialnos,\n int *_cserialnos,ogg_page *_og){\n  ogg_page og;\n  int      ret;\n  if(!_og){\n    /*No need to clamp the boundary offset against _of->end, as all errors\n       become OP_ENOTFORMAT.*/\n    if(OP_UNLIKELY(op_get_next_page(_of,&og,\n     OP_ADV_OFFSET(_of->offset,OP_CHUNK_SIZE))<0)){\n      return OP_ENOTFORMAT;\n    }\n    _og=&og;\n  }\n  
_of->ready_state=OP_OPENED;\n  ret=op_fetch_headers_impl(_of,_head,_tags,_serialnos,_nserialnos,\n   _cserialnos,_og);\n  /*Revert back from OP_STREAMSET to OP_OPENED on failure, to prevent\n     double-free of the tags in an unseekable stream.*/\n  if(OP_UNLIKELY(ret<0))_of->ready_state=OP_OPENED;\n  return ret;\n}\n\n/*Granule position manipulation routines.\n  A granule position is defined to be an unsigned 64-bit integer, with the\n   special value -1 in two's complement indicating an unset or invalid granule\n   position.\n  We are not guaranteed to have an unsigned 64-bit type, so we construct the\n   following routines that\n   a) Properly order negative numbers as larger than positive numbers, and\n   b) Check for underflow or overflow past the special -1 value.\n  This lets us operate on the full, valid range of granule positions in a\n   consistent and safe manner.\n  This full range is organized into distinct regions:\n   [ -1 (invalid) ][ 0 ... OP_INT64_MAX ][ OP_INT64_MIN ... 
-2 ][-1 (invalid) ]\n\n  No one should actually use granule positions so large that they're negative,\n   even if they are technically valid, as very little software handles them\n   correctly (including most of Xiph.Org's).\n  This library also refuses to support durations so large they won't fit in a\n   signed 64-bit integer (to avoid exposing this mess to the application, and\n   to simplify a good deal of internal arithmetic), so the only way to use them\n   successfully is if pcm_start is very large.\n  This means there isn't anything you can do with negative granule positions\n   that you couldn't have done with purely non-negative ones.\n  The main purpose of these routines is to allow us to think very explicitly\n   about the possible failure cases of all granule position manipulations.*/\n\n/*Safely adds a small signed integer to a valid (not -1) granule position.\n  The result can use the full 64-bit range of values (both positive and\n   negative), but will fail on overflow (wrapping past -1; wrapping past\n   OP_INT64_MAX is explicitly okay).\n  [out] _dst_gp: The resulting granule position.\n                 Only modified on success.\n  _src_gp:       The granule position to add to.\n                 This must not be -1.\n  _delta:        The amount to add.\n                 This is allowed to be up to 32 bits to support the maximum\n                  duration of a single Ogg page (255 packets * 120 ms per\n                  packet == 1,468,800 samples at 48 kHz).\n  Return: 0 on success, or OP_EINVAL if the result would wrap around past -1.*/\nstatic int op_granpos_add(ogg_int64_t *_dst_gp,ogg_int64_t _src_gp,\n opus_int32 _delta){\n  /*The code below handles this case correctly, but there's no reason we\n     should ever be called with these values, so make sure we aren't.*/\n  OP_ASSERT(_src_gp!=-1);\n  if(_delta>0){\n    /*Adding this amount to the granule position would overflow its 64-bit\n       range.*/\n    
if(OP_UNLIKELY(_src_gp<0)&&OP_UNLIKELY(_src_gp>=-1-_delta))return OP_EINVAL;\n    if(OP_UNLIKELY(_src_gp>OP_INT64_MAX-_delta)){\n      /*Adding this amount to the granule position would overflow the positive\n         half of its 64-bit range.\n        Since signed overflow is undefined in C, do it in a way the compiler\n         isn't allowed to screw up.*/\n      _delta-=(opus_int32)(OP_INT64_MAX-_src_gp)+1;\n      _src_gp=OP_INT64_MIN;\n    }\n  }\n  else if(_delta<0){\n    /*Subtracting this amount from the granule position would underflow its\n       64-bit range.*/\n    if(_src_gp>=0&&OP_UNLIKELY(_src_gp<-_delta))return OP_EINVAL;\n    if(OP_UNLIKELY(_src_gp<OP_INT64_MIN-_delta)){\n      /*Subtracting this amount from the granule position would underflow the\n         negative half of its 64-bit range.\n        Since signed underflow is undefined in C, do it in a way the compiler\n         isn't allowed to screw up.*/\n      _delta+=(opus_int32)(_src_gp-OP_INT64_MIN)+1;\n      _src_gp=OP_INT64_MAX;\n    }\n  }\n  *_dst_gp=_src_gp+_delta;\n  return 0;\n}\n\n/*Safely computes the difference between two granule positions.\n  The difference must fit in a signed 64-bit integer, or the function fails.\n  It correctly handles the case where the granule position has wrapped around\n   from positive values to negative ones.\n  [out] _delta: The difference between the granule positions.\n                Only modified on success.\n  _gp_a:        The granule position to subtract from.\n                This must not be -1.\n  _gp_b:        The granule position to subtract.\n                This must not be -1.\n  Return: 0 on success, or OP_EINVAL if the result would not fit in a signed\n           64-bit integer.*/\nstatic int op_granpos_diff(ogg_int64_t *_delta,\n ogg_int64_t _gp_a,ogg_int64_t _gp_b){\n  int gp_a_negative;\n  int gp_b_negative;\n  /*The code below handles these cases correctly, but there's no reason we\n     should ever be called with these values, so 
make sure we aren't.*/\n  OP_ASSERT(_gp_a!=-1);\n  OP_ASSERT(_gp_b!=-1);\n  gp_a_negative=OP_UNLIKELY(_gp_a<0);\n  gp_b_negative=OP_UNLIKELY(_gp_b<0);\n  if(OP_UNLIKELY(gp_a_negative^gp_b_negative)){\n    ogg_int64_t da;\n    ogg_int64_t db;\n    if(gp_a_negative){\n      /*_gp_a has wrapped to a negative value but _gp_b hasn't: the difference\n         should be positive.*/\n      /*Step 1: Handle wrapping.*/\n      /*_gp_a < 0 => da < 0.*/\n      da=(OP_INT64_MIN-_gp_a)-1;\n      /*_gp_b >= 0  => db >= 0.*/\n      db=OP_INT64_MAX-_gp_b;\n      /*Step 2: Check for overflow.*/\n      if(OP_UNLIKELY(OP_INT64_MAX+da<db))return OP_EINVAL;\n      *_delta=db-da;\n    }\n    else{\n      /*_gp_b has wrapped to a negative value but _gp_a hasn't: the difference\n         should be negative.*/\n      /*Step 1: Handle wrapping.*/\n      /*_gp_a >= 0 => da <= 0*/\n      da=_gp_a+OP_INT64_MIN;\n      /*_gp_b < 0 => db <= 0*/\n      db=OP_INT64_MIN-_gp_b;\n      /*Step 2: Check for overflow.*/\n      if(OP_UNLIKELY(da<OP_INT64_MIN-db))return OP_EINVAL;\n      *_delta=da+db;\n    }\n  }\n  else *_delta=_gp_a-_gp_b;\n  return 0;\n}\n\nstatic int op_granpos_cmp(ogg_int64_t _gp_a,ogg_int64_t _gp_b){\n  /*The invalid granule position -1 should behave like NaN: neither greater\n     than nor less than any other granule position, nor equal to any other\n     granule position, including itself.\n    However, that means there isn't anything we could sensibly return from this\n     function for it.*/\n  OP_ASSERT(_gp_a!=-1);\n  OP_ASSERT(_gp_b!=-1);\n  /*Handle the wrapping cases.*/\n  if(OP_UNLIKELY(_gp_a<0)){\n    if(_gp_b>=0)return 1;\n    /*Else fall through.*/\n  }\n  else if(OP_UNLIKELY(_gp_b<0))return -1;\n  /*No wrapping case.*/\n  return (_gp_a>_gp_b)-(_gp_b>_gp_a);\n}\n\n/*Returns the duration of the packet (in samples at 48 kHz), or a negative\n   value on error.*/\nstatic int op_get_packet_duration(const unsigned char *_data,int _len){\n  int nframes;\n  int frame_size;\n  
  int nsamples;
  nframes=opus_packet_get_nb_frames(_data,_len);
  if(OP_UNLIKELY(nframes<0))return OP_EBADPACKET;
  frame_size=opus_packet_get_samples_per_frame(_data,48000);
  nsamples=nframes*frame_size;
  if(OP_UNLIKELY(nsamples>120*48))return OP_EBADPACKET;
  return nsamples;
}

/*This function more properly belongs in info.c, but we define it here to allow
   the static granule position manipulation functions to remain static.*/
ogg_int64_t opus_granule_sample(const OpusHead *_head,ogg_int64_t _gp){
  opus_int32 pre_skip;
  pre_skip=_head->pre_skip;
  if(_gp!=-1&&op_granpos_add(&_gp,_gp,-pre_skip))_gp=-1;
  return _gp;
}

/*Grab all the packets currently in the stream state, and compute their
   durations.
  _of->op_count is set to the number of packets collected.
  [out] _durations: Returns the durations of the individual packets.
  Return: The total duration of all packets, or OP_HOLE if there was a hole.*/
static opus_int32 op_collect_audio_packets(OggOpusFile *_of,
 int _durations[255]){
  opus_int32 total_duration;
  int        op_count;
  /*Count the durations of all packets in the page.*/
  op_count=0;
  total_duration=0;
  for(;;){
    int ret;
    /*This takes advantage of undocumented libogg behavior that returned
       ogg_packet buffers are valid at least until the next page is
       submitted.
      Relying on this is not too terrible, as _none_ of the Ogg memory
       ownership/lifetime rules are well-documented.
      But I can read its code and know this will work.*/
    ret=ogg_stream_packetout(&_of->os,_of->op+op_count);
    if(!ret)break;
    if(OP_UNLIKELY(ret<0)){
      /*We shouldn't get holes in the middle of pages.*/
      OP_ASSERT(op_count==0);
      /*Set the return value and break out of the loop.
        We want to make sure op_count gets set to 0, because we've ingested a
         page, so any previously loaded packets are now invalid.*/
      total_duration=OP_HOLE;
      break;
    }
    /*Unless libogg is broken, we can't get more than 255 packets from a
       single page.*/
    OP_ASSERT(op_count<255);
    _durations[op_count]=op_get_packet_duration(_of->op[op_count].packet,
     _of->op[op_count].bytes);
    if(OP_LIKELY(_durations[op_count]>0)){
      /*With at most 255 packets on a page, this can't overflow.*/
      total_duration+=_durations[op_count++];
    }
    /*Ignore packets with an invalid TOC sequence.*/
    else if(op_count>0){
      /*But save the granule position, if there was one.*/
      _of->op[op_count-1].granulepos=_of->op[op_count].granulepos;
    }
  }
  _of->op_pos=0;
  _of->op_count=op_count;
  return total_duration;
}

/*Starting from current cursor position, get the initial PCM offset of the next
   page.
  This also validates the granule position on the first page with a completed
   audio data packet, as required by the spec.
  If this link is completely empty (no pages with completed packets), then this
   function sets pcm_start=pcm_end=0 and returns the BOS page of the next link
   (if any).
  In the seekable case, we initialize pcm_end=-1 before calling this function,
   so that later we can detect that the link was empty before calling
   op_find_final_pcm_offset().
  [inout] _link: The link for which to find pcm_start.
  [out] _og:     Returns the BOS page of the next link if this link was empty.
                 In the unseekable case, we can then feed this to
                  op_fetch_headers() to start the next link.
                 The caller may pass NULL (e.g., for seekable streams), in
                  which case this page will be discarded.
  Return: 0 on success, 1 if there is a buffered BOS page available, or a
           negative value on unrecoverable error.*/
static int op_find_initial_pcm_offset(OggOpusFile *_of,
 OggOpusLink *_link,ogg_page *_og){
  ogg_page     og;
  opus_int64   page_offset;
  ogg_int64_t  pcm_start;
  ogg_int64_t  prev_packet_gp;
  ogg_int64_t  cur_page_gp;
  ogg_uint32_t serialno;
  opus_int32   total_duration;
  int          durations[255];
  int          cur_page_eos;
  int          op_count;
  int          pi;
  if(_og==NULL)_og=&og;
  serialno=_of->os.serialno;
  op_count=0;
  /*We shouldn't have to initialize total_duration, but gcc is too dumb to
     figure out that op_count>0 implies we've been through the whole loop at
     least once.*/
  total_duration=0;
  do{
    page_offset=op_get_next_page(_of,_og,_of->end);
    /*We should get a page unless the file is truncated or mangled.
      Otherwise there are no audio data packets in the whole logical stream.*/
    if(OP_UNLIKELY(page_offset<0)){
      /*Fail if there was a read error.*/
      if(page_offset<OP_FALSE)return (int)page_offset;
      /*Fail if the pre-skip is non-zero, since it's asking us to skip more
         samples than exist.*/
      if(_link->head.pre_skip>0)return OP_EBADTIMESTAMP;
      _link->pcm_file_offset=0;
      /*Set pcm_end and end_offset so we can skip the call to
         op_find_final_pcm_offset().*/
      _link->pcm_start=_link->pcm_end=0;
      _link->end_offset=_link->data_offset;
      return 0;
    }
    /*Similarly, if we hit the next link in the chain, we've gone too far.*/
    if(OP_UNLIKELY(ogg_page_bos(_og))){
      if(_link->head.pre_skip>0)return OP_EBADTIMESTAMP;
      /*Set pcm_end and end_offset so we can skip the call to
         op_find_final_pcm_offset().*/
      _link->pcm_file_offset=0;
      _link->pcm_start=_link->pcm_end=0;
      _link->end_offset=_link->data_offset;
      /*Tell the caller we've got a buffered page for them.*/
      return 1;
    }
    /*Ignore pages from other streams (not strictly necessary, because of the
       checks in ogg_stream_pagein(), but saves some work).*/
    if(serialno!=(ogg_uint32_t)ogg_page_serialno(_og))continue;
    ogg_stream_pagein(&_of->os,_og);
    /*Bitrate tracking: add the header's bytes here.
      The body bytes are counted when we consume the packets.*/
    _of->bytes_tracked+=_og->header_len;
    /*Count the durations of all packets in the page.*/
    do total_duration=op_collect_audio_packets(_of,durations);
    /*Ignore holes.*/
    while(OP_UNLIKELY(total_duration<0));
    op_count=_of->op_count;
  }
  while(op_count<=0);
  /*We found the first page with a completed audio data packet: actually look
     at the granule position.
    RFC 3533 says, "A special value of -1 (in two's complement) indicates that
     no packets finish on this page," which does not say that a granule
     position that is NOT -1 indicates that some packets DO finish on that page
     (even though this was the intention, libogg itself violated this intention
     for years before we fixed it).
    The Ogg Opus specification only imposes its start-time requirements
     on the granule position of the first page with completed packets,
     so we ignore any set granule positions until then.*/
  cur_page_gp=_of->op[op_count-1].granulepos;
  /*But getting a packet without a valid granule position on the page is not
     okay.*/
  if(cur_page_gp==-1)return OP_EBADTIMESTAMP;
  cur_page_eos=_of->op[op_count-1].e_o_s;
  if(OP_LIKELY(!cur_page_eos)){
    /*The EOS flag wasn't set.
      Work backwards from the provided granule position to get the starting PCM
       offset.*/
    if(OP_UNLIKELY(op_granpos_add(&pcm_start,cur_page_gp,-total_duration)<0)){
      /*The starting granule position MUST not be smaller than the amount of
         audio on the first page with completed packets.*/
      return OP_EBADTIMESTAMP;
    }
  }
  else{
    /*The first page with completed packets was also the last.*/
    if(OP_LIKELY(op_granpos_add(&pcm_start,cur_page_gp,-total_duration)<0)){
      /*If there's less audio on the page than indicated by the granule
         position, then we're doing end-trimming, and the starting PCM offset
         is zero by spec mandate.*/
      pcm_start=0;
      /*However, the end-trimming MUST not ask us to trim more samples than
         exist after applying the pre-skip.*/
      if(OP_UNLIKELY(op_granpos_cmp(cur_page_gp,_link->head.pre_skip)<0)){
        return OP_EBADTIMESTAMP;
      }
    }
  }
  /*Timestamp the individual packets.*/
  prev_packet_gp=pcm_start;
  for(pi=0;pi<op_count;pi++){
    if(cur_page_eos){
      ogg_int64_t diff;
      OP_ALWAYS_TRUE(!op_granpos_diff(&diff,cur_page_gp,prev_packet_gp));
      diff=durations[pi]-diff;
      /*If we have samples to trim...*/
      if(diff>0){
        /*If we trimmed the entire packet, stop (the spec says encoders
           shouldn't do this, but we support it anyway).*/
        if(OP_UNLIKELY(diff>durations[pi]))break;
        _of->op[pi].granulepos=prev_packet_gp=cur_page_gp;
        /*Move the EOS flag to this packet, if necessary, so we'll trim the
           samples.*/
        _of->op[pi].e_o_s=1;
        continue;
      }
    }
    /*Update the granule position as normal.*/
    OP_ALWAYS_TRUE(!op_granpos_add(&_of->op[pi].granulepos,
     prev_packet_gp,durations[pi]));
    prev_packet_gp=_of->op[pi].granulepos;
  }
  /*Update the packet count after end-trimming.*/
  _of->op_count=pi;
  _of->cur_discard_count=_link->head.pre_skip;
  _link->pcm_file_offset=0;
  _of->prev_packet_gp=_link->pcm_start=pcm_start;
  _of->prev_page_offset=page_offset;
  return 0;
}

/*Starting from current cursor position, get the final PCM offset of the
   previous page.
  This also validates the duration of the link, which, while not strictly
   required by the spec, we need to ensure duration calculations don't
   overflow.
  This is only done for seekable sources.
  We must validate that op_find_initial_pcm_offset() succeeded for this link
   before calling this function, otherwise it will scan the entire stream
   backwards until it reaches the start, and then
   fail.*/
static int op_find_final_pcm_offset(OggOpusFile *_of,
 const ogg_uint32_t *_serialnos,int _nserialnos,OggOpusLink *_link,
 opus_int64 _offset,ogg_uint32_t _end_serialno,ogg_int64_t _end_gp,
 ogg_int64_t *_total_duration){
  ogg_int64_t  total_duration;
  ogg_int64_t  duration;
  ogg_uint32_t cur_serialno;
  /*For the time being, fetch end PCM offset the simple way.*/
  cur_serialno=_link->serialno;
  if(_end_serialno!=cur_serialno||_end_gp==-1){
    _offset=op_get_last_page(_of,&_end_gp,_offset,
     cur_serialno,_serialnos,_nserialnos);
    if(OP_UNLIKELY(_offset<0))return (int)_offset;
  }
  /*At worst we should have found the first page with completed packets.*/
  if(OP_UNLIKELY(_offset<_link->data_offset))return OP_EBADLINK;
  /*This implementation requires that the difference between the first and last
     granule positions in each link be representable in a signed, 64-bit
     number, and that each link also have at least as many samples as the
     pre-skip requires.*/
  if(OP_UNLIKELY(op_granpos_diff(&duration,_end_gp,_link->pcm_start)<0)
   ||OP_UNLIKELY(duration<_link->head.pre_skip)){
    return OP_EBADTIMESTAMP;
  }
  /*We also require that the total duration be representable in a signed,
     64-bit number.*/
  duration-=_link->head.pre_skip;
  total_duration=*_total_duration;
  if(OP_UNLIKELY(OP_INT64_MAX-duration<total_duration))return OP_EBADTIMESTAMP;
  *_total_duration=total_duration+duration;
  _link->pcm_end=_end_gp;
  _link->end_offset=_offset;
  return 0;
}

/*Rescale the number _x from the range [0,_from] to [0,_to].
  _from and _to must be positive.*/
static opus_int64 op_rescale64(opus_int64 _x,opus_int64 _from,opus_int64 _to){
  opus_int64 frac;
  opus_int64 ret;
  int        i;
  if(_x>=_from)return _to;
  if(_x<=0)return 0;
  frac=0;
  for(i=0;i<63;i++){
    frac<<=1;
    OP_ASSERT(_x<=_from);
    if(_x>=_from-_x){
      _x-=_from-_x;
      frac|=1;
    }
    else _x<<=1;
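    /*Invariant sketch (illustration, not part of the library): each pass
       tests whether twice the current remainder reaches _from, appends the
       corresponding bit of the binary expansion of _x/_from to frac, and
       doubles the remainder, exactly as in binary long division.
      For example, with _x=1 and _from=3 the emitted bits repeat 0,1,0,1,...,
       so frac converges on 2^63/3 and the second loop below yields
       approximately _to/3.*/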
  }
  ret=0;
  for(i=0;i<63;i++){
    if(frac&1)ret=(ret&_to&1)+(ret>>1)+(_to>>1);
    else ret>>=1;
    frac>>=1;
  }
  return ret;
}

/*The minimum granule position spacing allowed for making predictions.
  This corresponds to about 1 second of audio at 48 kHz for both Opus and
   Vorbis, or one keyframe interval in Theora with the default keyframe spacing
   of 256.*/
#define OP_GP_SPACING_MIN (48000)

/*Try to estimate the location of the next link using the current seek
   records, assuming the initial granule position of any streams we've found is
   0.*/
static opus_int64 op_predict_link_start(const OpusSeekRecord *_sr,int _nsr,
 opus_int64 _searched,opus_int64 _end_searched,opus_int32 _bias){
  opus_int64 bisect;
  int        sri;
  int        srj;
  /*Require that we be at least OP_CHUNK_SIZE from the end.
    We don't require that we be at least OP_CHUNK_SIZE from the beginning,
     because if we are we'll just scan forward without seeking.*/
  _end_searched-=OP_CHUNK_SIZE;
  if(_searched>=_end_searched)return -1;
  bisect=_end_searched;
  for(sri=0;sri<_nsr;sri++){
    ogg_int64_t  gp1;
    ogg_int64_t  gp2_min;
    ogg_uint32_t serialno1;
    opus_int64   offset1;
    /*If the granule position is negative, either it's invalid or we'd cause
       overflow.*/
    gp1=_sr[sri].gp;
    if(gp1<0)continue;
    /*We require some minimum distance between granule positions to make an
       estimate.
      We don't actually know what granule position scheme is being used,
       because we have no idea what kind of stream these came from.
      Therefore we require a minimum spacing between them, with the
       expectation that while bitrates and granule position increments might
       vary locally in quite complex ways, they are globally smooth.*/
    if(OP_UNLIKELY(op_granpos_add(&gp2_min,gp1,OP_GP_SPACING_MIN)<0)){
      /*No granule position would satisfy us.*/
      continue;
    }
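    /*Estimation sketch (illustrative, not part of the library): for a pair of
       records (offset1,gp1) and (offset2,gp2) from the same stream, the inner
       loop below extrapolates linearly back to granule position 0:
         predicted offset ~ offset2-(gp2/(gp2-gp1))*(offset2-offset1),
       splitting the quotient into an integer part plus an op_rescale64() of
       the remainder so the multiplication cannot overflow.
      E.g., records at byte 1000000 with gp 480000 and byte 2000000 with
       gp 960000 predict the link began near byte 0 (before the _bias
       adjustment).*/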
    offset1=_sr[sri].offset;
    serialno1=_sr[sri].serialno;
    for(srj=sri;srj-->0;){
      ogg_int64_t gp2;
      opus_int64  offset2;
      opus_int64  num;
      ogg_int64_t den;
      ogg_int64_t ipart;
      gp2=_sr[srj].gp;
      if(gp2<gp2_min)continue;
      /*Oh, and also make sure these came from the same stream.*/
      if(_sr[srj].serialno!=serialno1)continue;
      offset2=_sr[srj].offset;
      /*For once, we can subtract with impunity.*/
      den=gp2-gp1;
      ipart=gp2/den;
      num=offset2-offset1;
      OP_ASSERT(num>0);
      if(ipart>0&&(offset2-_searched)/ipart<num)continue;
      offset2-=ipart*num;
      gp2-=ipart*den;
      offset2-=op_rescale64(gp2,den,num)-_bias;
      if(offset2<_searched)continue;
      bisect=OP_MIN(bisect,offset2);
      break;
    }
  }
  return bisect>=_end_searched?-1:bisect;
}

/*Finds each bitstream link, one at a time, using a bisection search.
  This has to begin by knowing the offset of the first link's initial page.*/
static int op_bisect_forward_serialno(OggOpusFile *_of,
 opus_int64 _searched,OpusSeekRecord *_sr,int _csr,
 ogg_uint32_t **_serialnos,int *_nserialnos,int *_cserialnos){
  ogg_page      og;
  OggOpusLink  *links;
  int           nlinks;
  int           clinks;
  ogg_uint32_t *serialnos;
  int           nserialnos;
  ogg_int64_t   total_duration;
  int           nsr;
  int           ret;
  links=_of->links;
  nlinks=clinks=_of->nlinks;
  total_duration=0;
  /*We start with one seek record, for the last page in the file.
    We build up a list of records for places we seek to during link
     enumeration.
    This list is kept sorted in reverse order.
    We only care about seek locations that were _not_ in the current link,
     therefore we can add them one at a time to the end of the list as we
     improve the lower bound on the location where the next link starts.*/
  nsr=1;
  for(;;){
    opus_int64  end_searched;
    opus_int64  bisect;
    opus_int64  next;
    opus_int64  last;
    ogg_int64_t end_offset;
    ogg_int64_t end_gp;
    int         sri;
    serialnos=*_serialnos;
    nserialnos=*_nserialnos;
    if(OP_UNLIKELY(nlinks>=clinks)){
      if(OP_UNLIKELY(clinks>INT_MAX-1>>1))return OP_EFAULT;
      clinks=2*clinks+1;
      OP_ASSERT(nlinks<clinks);
      links=(OggOpusLink *)_ogg_realloc(links,sizeof(*links)*clinks);
      if(OP_UNLIKELY(links==NULL))return OP_EFAULT;
      _of->links=links;
    }
    /*Invariants:
      We have the headers and serial numbers for the link beginning at 'begin'.
      We have the offset and granule position of the last page in the file
       (potentially not a page we care about).*/
    /*Scan the seek records we already have to save us some bisection.*/
    for(sri=0;sri<nsr;sri++){
      if(op_lookup_serialno(_sr[sri].serialno,serialnos,nserialnos))break;
    }
    /*Is the last page in our current list of serial numbers?*/
    if(sri<=0)break;
    /*Last page wasn't found.
      We have at least one more link.*/
    last=-1;
    end_searched=_sr[sri-1].search_start;
    next=_sr[sri-1].offset;
    end_gp=-1;
    if(sri<nsr){
      _searched=_sr[sri].offset+_sr[sri].size;
      if(_sr[sri].serialno==links[nlinks-1].serialno){
        end_gp=_sr[sri].gp;
        end_offset=_sr[sri].offset;
      }
    }
    nsr=sri;
    bisect=-1;
    /*If we've already found the end of at least one link, try to pick the
       first bisection point at twice the average link size.
      This is a good choice for files with lots of links that are all about the
       same size.*/
    if(nlinks>1){
      opus_int64 last_offset;
      opus_int64 avg_link_size;
      opus_int64 upper_limit;
      last_offset=links[nlinks-1].offset;
      avg_link_size=last_offset/(nlinks-1);
      upper_limit=end_searched-OP_CHUNK_SIZE-avg_link_size;
      if(OP_LIKELY(last_offset>_searched-avg_link_size)
       &&OP_LIKELY(last_offset<upper_limit)){
        bisect=last_offset+avg_link_size;
        if(OP_LIKELY(bisect<upper_limit))bisect+=avg_link_size;
      }
    }
    /*We guard against garbage separating the last and first pages of two
       links below.*/
    while(_searched<end_searched){
      opus_int32 next_bias;
      /*If we don't have a better estimate, use simple bisection.*/
      if(bisect==-1)bisect=_searched+(end_searched-_searched>>1);
      /*If we're within OP_CHUNK_SIZE of the start, scan forward.*/
      if(bisect-_searched<OP_CHUNK_SIZE)bisect=_searched;
      /*Otherwise we're skipping data.
        Forget the end page, if we saw one, as we might miss a later one.*/
      else end_gp=-1;
      ret=op_seek_helper(_of,bisect);
      if(OP_UNLIKELY(ret<0))return ret;
      last=op_get_next_page(_of,&og,_sr[nsr-1].offset);
      if(OP_UNLIKELY(last<OP_FALSE))return (int)last;
      next_bias=0;
      if(last==OP_FALSE)end_searched=bisect;
      else{
        ogg_uint32_t serialno;
        ogg_int64_t  gp;
        serialno=ogg_page_serialno(&og);
        gp=ogg_page_granulepos(&og);
        if(!op_lookup_serialno(serialno,serialnos,nserialnos)){
          end_searched=bisect;
          next=last;
          /*In reality we should always have enough room, but be paranoid.*/
          if(OP_LIKELY(nsr<_csr)){
            _sr[nsr].search_start=bisect;
            _sr[nsr].offset=last;
            OP_ASSERT(_of->offset-last>=0);
            OP_ASSERT(_of->offset-last<=OP_PAGE_SIZE_MAX);
            _sr[nsr].size=(opus_int32)(_of->offset-last);
            _sr[nsr].serialno=serialno;
            _sr[nsr].gp=gp;
            nsr++;
          }
        }
        else{
          _searched=_of->offset;
          next_bias=OP_CHUNK_SIZE;
          if(serialno==links[nlinks-1].serialno){
            /*This page was from the stream we want, remember it.
              If it's the last such page in the link, we won't have to go back
               looking for it later.*/
            end_gp=gp;
            end_offset=last;
          }
        }
      }
      bisect=op_predict_link_start(_sr,nsr,_searched,end_searched,next_bias);
    }
    /*Bisection point found.
      Get the final granule position of the previous link, assuming
       op_find_initial_pcm_offset() didn't already determine the link was
       empty.*/
    if(OP_LIKELY(links[nlinks-1].pcm_end==-1)){
      if(end_gp==-1){
        /*If we don't know where the end page is, we'll have to seek back and
           look for it, starting from the end of the link.*/
        end_offset=next;
        /*Also forget the last page we read.
          It won't be available after the seek.*/
        last=-1;
      }
      ret=op_find_final_pcm_offset(_of,serialnos,nserialnos,
       links+nlinks-1,end_offset,links[nlinks-1].serialno,end_gp,
       &total_duration);
      if(OP_UNLIKELY(ret<0))return ret;
    }
    if(last!=next){
      /*The last page we read was not the first page of the next link.
        Move the cursor position to the offset of that first page.
        This only performs an actual seek if the first page of the next link
         does not start at the end of the last page from the current Opus
         stream with a valid granule position.*/
      ret=op_seek_helper(_of,next);
      if(OP_UNLIKELY(ret<0))return ret;
    }
    ret=op_fetch_headers(_of,&links[nlinks].head,&links[nlinks].tags,
     _serialnos,_nserialnos,_cserialnos,last!=next?NULL:&og);
    if(OP_UNLIKELY(ret<0))return ret;
    links[nlinks].offset=next;
    links[nlinks].data_offset=_of->offset;
    links[nlinks].serialno=_of->os.serialno;
    links[nlinks].pcm_end=-1;
    /*This might consume a page from the next link, however the next bisection
       always starts with a seek.*/
    ret=op_find_initial_pcm_offset(_of,links+nlinks,NULL);
    if(OP_UNLIKELY(ret<0))return ret;
    links[nlinks].pcm_file_offset=total_duration;
    _searched=_of->offset;
    /*Mark the current link count so it can be cleaned up on error.*/
    _of->nlinks=++nlinks;
  }
  /*Last page is in the starting serialno list, so we've reached the last link.
    Now find the last granule position for it (if we didn't the first time we
     looked at the end of the stream, and if op_find_initial_pcm_offset()
     didn't already determine the link was empty).*/
  if(OP_LIKELY(links[nlinks-1].pcm_end==-1)){
    ret=op_find_final_pcm_offset(_of,serialnos,nserialnos,
     links+nlinks-1,_sr[0].offset,_sr[0].serialno,_sr[0].gp,&total_duration);
    if(OP_UNLIKELY(ret<0))return ret;
  }
  /*Trim back the links array if necessary.*/
  links=(OggOpusLink *)_ogg_realloc(links,sizeof(*links)*nlinks);
  if(OP_LIKELY(links!=NULL))_of->links=links;
  /*We also don't need these anymore.*/
  _ogg_free(*_serialnos);
  *_serialnos=NULL;
  *_cserialnos=*_nserialnos=0;
  return 0;
}

static void op_update_gain(OggOpusFile *_of){
  OpusHead   *head;
  opus_int32  gain_q8;
  int         li;
  /*If decode isn't ready, then we'll apply the gain when we initialize the
     decoder.*/
  if(_of->ready_state<OP_INITSET)return;
  gain_q8=_of->gain_offset_q8;
  li=_of->seekable?_of->cur_link:0;
  head=&_of->links[li].head;
  /*We don't have to worry about overflow here because the header gain and
     track gain must lie in the range [-32768,32767], and the user-supplied
     offset has been pre-clamped to [-98302,98303].*/
  switch(_of->gain_type){
    case OP_ALBUM_GAIN:{
      int album_gain_q8;
      album_gain_q8=0;
      opus_tags_get_album_gain(&_of->links[li].tags,&album_gain_q8);
      gain_q8+=album_gain_q8;
      gain_q8+=head->output_gain;
    }break;
    case OP_TRACK_GAIN:{
      int track_gain_q8;
      track_gain_q8=0;
      opus_tags_get_track_gain(&_of->links[li].tags,&track_gain_q8);
      gain_q8+=track_gain_q8;
      gain_q8+=head->output_gain;
    }break;
    case OP_HEADER_GAIN:gain_q8+=head->output_gain;break;
    case OP_ABSOLUTE_GAIN:break;
    default:OP_ASSERT(0);
  }
  gain_q8=OP_CLAMP(-32768,gain_q8,32767);
  OP_ASSERT(_of->od!=NULL);
#if defined(OPUS_SET_GAIN)
  opus_multistream_decoder_ctl(_of->od,OPUS_SET_GAIN(gain_q8));
#else
/*A fallback that works with both float and fixed-point is a bunch of work,
   so just force people to use a sufficiently new version.
  This is deployed well enough at this point that this shouldn't be a burden.*/
# error "libopus 1.0.1 or later required"
#endif
}

static int op_make_decode_ready(OggOpusFile *_of){
  const OpusHead *head;
  int             li;
  int             stream_count;
  int             coupled_count;
  int             channel_count;
  if(_of->ready_state>OP_STREAMSET)return 0;
  if(OP_UNLIKELY(_of->ready_state<OP_STREAMSET))return OP_EFAULT;
  li=_of->seekable?_of->cur_link:0;
  head=&_of->links[li].head;
  stream_count=head->stream_count;
  coupled_count=head->coupled_count;
  channel_count=head->channel_count;
  /*Check to see if the current decoder is compatible with the current link.*/
  if(_of->od!=NULL&&_of->od_stream_count==stream_count
   &&_of->od_coupled_count==coupled_count&&_of->od_channel_count==channel_count
   &&memcmp(_of->od_mapping,head->mapping,
   sizeof(*head->mapping)*channel_count)==0){
    opus_multistream_decoder_ctl(_of->od,OPUS_RESET_STATE);
  }
  else{
    int err;
    opus_multistream_decoder_destroy(_of->od);
    _of->od=opus_multistream_decoder_create(48000,channel_count,
     stream_count,coupled_count,head->mapping,&err);
    if(_of->od==NULL)return OP_EFAULT;
    _of->od_stream_count=stream_count;
    _of->od_coupled_count=coupled_count;
    _of->od_channel_count=channel_count;
    memcpy(_of->od_mapping,head->mapping,sizeof(*head->mapping)*channel_count);
  }
  _of->ready_state=OP_INITSET;
  _of->bytes_tracked=0;
  _of->samples_tracked=0;
#if !defined(OP_FIXED_POINT)
  _of->state_channel_count=0;
  /*Use the serial number for the PRNG seed to get repeatable output for
     straight play-throughs.*/
  _of->dither_seed=_of->links[li].serialno;
#endif
  op_update_gain(_of);
  return 0;
}

static int op_open_seekable2_impl(OggOpusFile *_of){
  /*64 seek records should be enough for anybody.
    Actually, with a bisection search in a 63-bit range down to OP_CHUNK_SIZE
     granularity, much more than enough.*/
  OpusSeekRecord sr[64];
  opus_int64     data_offset;
  int            ret;
  /*We can seek, so set out learning all about this file.*/
  (*_of->callbacks.seek)(_of->stream,0,SEEK_END);
  _of->offset=_of->end=(*_of->callbacks.tell)(_of->stream);
  if(OP_UNLIKELY(_of->end<0))return OP_EREAD;
  data_offset=_of->links[0].data_offset;
  if(OP_UNLIKELY(_of->end<data_offset))return OP_EBADLINK;
  /*Get the offset of the last page of the physical bitstream, or, if we're
     lucky, the last Opus page of the first link, as most Ogg Opus files will
     contain a single logical bitstream.*/
  ret=op_get_prev_page_serial(_of,sr,_of->end,
   _of->links[0].serialno,_of->serialnos,_of->nserialnos);
  if(OP_UNLIKELY(ret<0))return ret;
  /*If there's any trailing junk, forget about it.*/
  _of->end=sr[0].offset+sr[0].size;
  if(OP_UNLIKELY(_of->end<data_offset))return OP_EBADLINK;
  /*Now enumerate the bitstream structure.*/
  return op_bisect_forward_serialno(_of,data_offset,sr,sizeof(sr)/sizeof(*sr),
   &_of->serialnos,&_of->nserialnos,&_of->cserialnos);
}

static int op_open_seekable2(OggOpusFile *_of){
  ogg_sync_state    oy_start;
  ogg_stream_state  os_start;
  ogg_packet       *op_start;
  opus_int64        prev_page_offset;
  opus_int64        start_offset;
  int               start_op_count;
  int               ret;
  /*We're partially open and have a first link header state in storage in _of.
    Save off that stream state so we can come back to it.
    It would be simpler to just dump all this state and seek back to
     links[0].data_offset when we're done.
    But we do the extra work to allow us to seek back to _exactly_ the same
     stream position we're at now.
    This allows, e.g., the HTTP backend to continue reading from the original
     connection (if it's still available), instead of opening a new one.
    This means we can open and start playing a normal Opus file with a single
     link and reasonable packet sizes using only two HTTP requests.*/
  start_op_count=_of->op_count;
  /*This is a bit too large to put on the stack unconditionally.*/
  op_start=(ogg_packet *)_ogg_malloc(sizeof(*op_start)*start_op_count);
  if(op_start==NULL)return OP_EFAULT;
  *&oy_start=_of->oy;
  *&os_start=_of->os;
  prev_page_offset=_of->prev_page_offset;
  start_offset=_of->offset;
  memcpy(op_start,_of->op,sizeof(*op_start)*start_op_count);
  OP_ASSERT((*_of->callbacks.tell)(_of->stream)==op_position(_of));
  ogg_sync_init(&_of->oy);
  ogg_stream_init(&_of->os,-1);
  ret=op_open_seekable2_impl(_of);
  /*Restore the old stream state.*/
  ogg_stream_clear(&_of->os);
  ogg_sync_clear(&_of->oy);
  *&_of->oy=*&oy_start;
  *&_of->os=*&os_start;
  _of->offset=start_offset;
  _of->op_count=start_op_count;
  memcpy(_of->op,op_start,sizeof(*_of->op)*start_op_count);
  _ogg_free(op_start);
  _of->prev_packet_gp=_of->links[0].pcm_start;
  _of->prev_page_offset=prev_page_offset;
  _of->cur_discard_count=_of->links[0].head.pre_skip;
  if(OP_UNLIKELY(ret<0))return ret;
  /*And restore the position indicator.*/
  ret=(*_of->callbacks.seek)(_of->stream,op_position(_of),SEEK_SET);
  return OP_UNLIKELY(ret<0)?OP_EREAD:0;
}

/*Clear out the current logical bitstream decoder.*/
static void op_decode_clear(OggOpusFile *_of){
  /*We don't actually free the decoder.
    We might be able to re-use it for the next link.*/
  _of->op_count=0;
  _of->od_buffer_size=0;
  _of->prev_packet_gp=-1;
  _of->prev_page_offset=-1;
  if(!_of->seekable){
    OP_ASSERT(_of->ready_state>=OP_INITSET);
    opus_tags_clear(&_of->links[0].tags);
  }
  _of->ready_state=OP_OPENED;
}

static void op_clear(OggOpusFile *_of){
  OggOpusLink *links;
  _ogg_free(_of->od_buffer);
  if(_of->od!=NULL)opus_multistream_decoder_destroy(_of->od);
  links=_of->links;
  if(!_of->seekable){
    if(_of->ready_state>OP_OPENED||_of->ready_state==OP_PARTOPEN){
      opus_tags_clear(&links[0].tags);
    }
  }
  else if(OP_LIKELY(links!=NULL)){
    int nlinks;
    int link;
    nlinks=_of->nlinks;
    for(link=0;link<nlinks;link++)opus_tags_clear(&links[link].tags);
  }
  _ogg_free(links);
  _ogg_free(_of->serialnos);
  ogg_stream_clear(&_of->os);
  ogg_sync_clear(&_of->oy);
  if(_of->callbacks.close!=NULL)(*_of->callbacks.close)(_of->stream);
}

static int op_open1(OggOpusFile *_of,
 void *_stream,const OpusFileCallbacks *_cb,
 const unsigned char *_initial_data,size_t _initial_bytes){
  ogg_page  og;
  ogg_page *pog;
  int       seekable;
  int       ret;
  memset(_of,0,sizeof(*_of));
  if(OP_UNLIKELY(_initial_bytes>(size_t)LONG_MAX))return OP_EFAULT;
  _of->end=-1;
  _of->stream=_stream;
  *&_of->callbacks=*_cb;
  /*At a minimum, we need to be able to read data.*/
  if(OP_UNLIKELY(_of->callbacks.read==NULL))return OP_EREAD;
  /*Initialize the framing state.*/
  ogg_sync_init(&_of->oy);
  /*Perhaps some data was previously read into a buffer for testing against
     other stream types.
    Allow initialization from this previously read data (especially as we may
     be reading from a non-seekable stream).
    This requires copying it into a buffer allocated by ogg_sync_buffer() and
     doesn't support seeking, so this is not a good mechanism to use for
     decoding entire files from RAM.*/
  if(_initial_bytes>0){
    char *buffer;
    buffer=ogg_sync_buffer(&_of->oy,(long)_initial_bytes);
    memcpy(buffer,_initial_data,_initial_bytes*sizeof(*buffer));
    ogg_sync_wrote(&_of->oy,(long)_initial_bytes);
  }
  /*Can we seek?
    Stevens suggests the seek test is portable.*/
  seekable=_cb->seek!=NULL&&(*_cb->seek)(_stream,0,SEEK_CUR)!=-1;
  /*If seek is implemented, tell must also be implemented.*/
  if(seekable){
    opus_int64 pos;
    if(OP_UNLIKELY(_of->callbacks.tell==NULL))return OP_EINVAL;
    pos=(*_of->callbacks.tell)(_of->stream);
    /*If the current position is not equal to the initial bytes consumed,
       absolute seeking will not work.*/
    if(OP_UNLIKELY(pos!=(opus_int64)_initial_bytes))return OP_EINVAL;
  }
  _of->seekable=seekable;
  /*Don't seek yet.
    Set up a 'single' (current) logical bitstream entry for partial open.*/
  _of->links=(OggOpusLink *)_ogg_malloc(sizeof(*_of->links));
  /*The serialno gets filled in later by op_fetch_headers().*/
  ogg_stream_init(&_of->os,-1);
  pog=NULL;
  for(;;){
    /*Fetch all BOS pages, store the Opus header and all seen serial numbers,
      and load subsequent Opus setup headers.*/
    ret=op_fetch_headers(_of,&_of->links[0].head,&_of->links[0].tags,
     &_of->serialnos,&_of->nserialnos,&_of->cserialnos,pog);
    if(OP_UNLIKELY(ret<0))break;
    _of->nlinks=1;
    _of->links[0].offset=0;
    _of->links[0].data_offset=_of->offset;
    _of->links[0].pcm_end=-1;
    _of->links[0].serialno=_of->os.serialno;
    /*Fetch the initial PCM offset.*/
    ret=op_find_initial_pcm_offset(_of,_of->links,&og);
    if(seekable||OP_LIKELY(ret<=0))break;
    /*This link was empty, but we already have the BOS page for the next one in
       og.
      We can't seek, so start processing the next link right now.*/
    opus_tags_clear(&_of->links[0].tags);
    _of->nlinks=0;
    if(!seekable)_of->cur_link++;
    pog=&og;
  }
  if(OP_LIKELY(ret>=0))_of->ready_state=OP_PARTOPEN;
  return ret;
}

static int op_open2(OggOpusFile *_of){
  int ret;
OP_ASSERT(_of->ready_state==OP_PARTOPEN);\n  if(_of->seekable){\n    _of->ready_state=OP_OPENED;\n    ret=op_open_seekable2(_of);\n  }\n  else ret=0;\n  if(OP_LIKELY(ret>=0)){\n    /*We have buffered packets from op_find_initial_pcm_offset().\n      Move to OP_INITSET so we can use them.*/\n    _of->ready_state=OP_STREAMSET;\n    ret=op_make_decode_ready(_of);\n    if(OP_LIKELY(ret>=0))return 0;\n  }\n  /*Don't auto-close the stream on failure.*/\n  _of->callbacks.close=NULL;\n  op_clear(_of);\n  return ret;\n}\n\nOggOpusFile *op_test_callbacks(void *_stream,const OpusFileCallbacks *_cb,\n const unsigned char *_initial_data,size_t _initial_bytes,int *_error){\n  OggOpusFile *of;\n  int          ret;\n  of=(OggOpusFile *)_ogg_malloc(sizeof(*of));\n  ret=OP_EFAULT;\n  if(OP_LIKELY(of!=NULL)){\n    ret=op_open1(of,_stream,_cb,_initial_data,_initial_bytes);\n    if(OP_LIKELY(ret>=0)){\n      if(_error!=NULL)*_error=0;\n      return of;\n    }\n    /*Don't auto-close the stream on failure.*/\n    of->callbacks.close=NULL;\n    op_clear(of);\n    _ogg_free(of);\n  }\n  if(_error!=NULL)*_error=ret;\n  return NULL;\n}\n\nOggOpusFile *op_open_callbacks(void *_stream,const OpusFileCallbacks *_cb,\n const unsigned char *_initial_data,size_t _initial_bytes,int *_error){\n  OggOpusFile *of;\n  of=op_test_callbacks(_stream,_cb,_initial_data,_initial_bytes,_error);\n  if(OP_LIKELY(of!=NULL)){\n    int ret;\n    ret=op_open2(of);\n    if(OP_LIKELY(ret>=0))return of;\n    if(_error!=NULL)*_error=ret;\n    _ogg_free(of);\n  }\n  return NULL;\n}\n\n/*Convenience routine to clean up from failure for the open functions that\n   create their own streams.*/\nstatic OggOpusFile *op_open_close_on_failure(void *_stream,\n const OpusFileCallbacks *_cb,int *_error){\n  OggOpusFile *of;\n  if(OP_UNLIKELY(_stream==NULL)){\n    if(_error!=NULL)*_error=OP_EFAULT;\n    return NULL;\n  }\n  of=op_open_callbacks(_stream,_cb,NULL,0,_error);\n  if(OP_UNLIKELY(of==NULL))(*_cb->close)(_stream);\n  
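op_open_close_on_failure() above encodes an ownership rule: an open that creates its own stream must close it when the open fails, while the callback-based opens leave cleanup to the caller. A minimal standalone sketch of that rule, using hypothetical toy_* stand-ins rather than the real API:

```c
#include <stddef.h>

typedef struct {
  int   is_open;
  int  *close_count; /* Counts close calls, for demonstration only. */
} toy_stream;

static void toy_close(toy_stream *_stream){
  _stream->is_open=0;
  ++*_stream->close_count;
}

/* Stand-in for op_open_callbacks(): fails when _fail is nonzero and, like the
   real function, does NOT close the stream on failure. */
static void *toy_open_callbacks(toy_stream *_stream,int _fail){
  return _fail?NULL:(void *)_stream;
}

/* Stand-in for op_open_close_on_failure(): since it created the stream, it
   must close it itself whenever the underlying open fails. */
static void *toy_open_close_on_failure(toy_stream *_stream,int _fail){
  void *of;
  if(_stream==NULL)return NULL;
  of=toy_open_callbacks(_stream,_fail);
  if(of==NULL)toy_close(_stream);
  return of;
}
```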
return of;\n}\n\nOggOpusFile *op_open_file(const char *_path,int *_error){\n  OpusFileCallbacks cb;\n  return op_open_close_on_failure(op_fopen(&cb,_path,\"rb\"),&cb,_error);\n}\n\nOggOpusFile *op_open_memory(const unsigned char *_data,size_t _size,\n int *_error){\n  OpusFileCallbacks cb;\n  return op_open_close_on_failure(op_mem_stream_create(&cb,_data,_size),&cb,\n   _error);\n}\n\n/*Convenience routine to clean up from failure for the open functions that\n   create their own streams.*/\nstatic OggOpusFile *op_test_close_on_failure(void *_stream,\n const OpusFileCallbacks *_cb,int *_error){\n  OggOpusFile *of;\n  if(OP_UNLIKELY(_stream==NULL)){\n    if(_error!=NULL)*_error=OP_EFAULT;\n    return NULL;\n  }\n  of=op_test_callbacks(_stream,_cb,NULL,0,_error);\n  if(OP_UNLIKELY(of==NULL))(*_cb->close)(_stream);\n  return of;\n}\n\nOggOpusFile *op_test_file(const char *_path,int *_error){\n  OpusFileCallbacks cb;\n  return op_test_close_on_failure(op_fopen(&cb,_path,\"rb\"),&cb,_error);\n}\n\nOggOpusFile *op_test_memory(const unsigned char *_data,size_t _size,\n int *_error){\n  OpusFileCallbacks cb;\n  return op_test_close_on_failure(op_mem_stream_create(&cb,_data,_size),&cb,\n   _error);\n}\n\nint op_test_open(OggOpusFile *_of){\n  int ret;\n  if(OP_UNLIKELY(_of->ready_state!=OP_PARTOPEN))return OP_EINVAL;\n  ret=op_open2(_of);\n  /*op_open2() will clear this structure on failure.\n    Reset its contents to prevent double-frees in op_free().*/\n  if(OP_UNLIKELY(ret<0))memset(_of,0,sizeof(*_of));\n  return ret;\n}\n\nvoid op_free(OggOpusFile *_of){\n  if(OP_LIKELY(_of!=NULL)){\n    op_clear(_of);\n    _ogg_free(_of);\n  }\n}\n\nint op_seekable(const OggOpusFile *_of){\n  return _of->seekable;\n}\n\nint op_link_count(const OggOpusFile *_of){\n  return _of->nlinks;\n}\n\nopus_uint32 op_serialno(const OggOpusFile *_of,int _li){\n  if(OP_UNLIKELY(_li>=_of->nlinks))_li=_of->nlinks-1;\n  if(!_of->seekable)_li=0;\n  return 
_of->links[_li<0?_of->cur_link:_li].serialno;\n}\n\nint op_channel_count(const OggOpusFile *_of,int _li){\n  return op_head(_of,_li)->channel_count;\n}\n\nopus_int64 op_raw_total(const OggOpusFile *_of,int _li){\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED)\n   ||OP_UNLIKELY(!_of->seekable)\n   ||OP_UNLIKELY(_li>=_of->nlinks)){\n    return OP_EINVAL;\n  }\n  if(_li<0)return _of->end;\n  return (_li+1>=_of->nlinks?_of->end:_of->links[_li+1].offset)\n   -(_li>0?_of->links[_li].offset:0);\n}\n\nogg_int64_t op_pcm_total(const OggOpusFile *_of,int _li){\n  OggOpusLink *links;\n  ogg_int64_t  pcm_total;\n  ogg_int64_t  diff;\n  int          nlinks;\n  nlinks=_of->nlinks;\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED)\n   ||OP_UNLIKELY(!_of->seekable)\n   ||OP_UNLIKELY(_li>=nlinks)){\n    return OP_EINVAL;\n  }\n  links=_of->links;\n  /*We verify that the granule position differences are larger than the\n     pre-skip and that the total duration does not overflow during link\n     enumeration, so we don't have to check here.*/\n  pcm_total=0;\n  if(_li<0){\n    pcm_total=links[nlinks-1].pcm_file_offset;\n    _li=nlinks-1;\n  }\n  OP_ALWAYS_TRUE(!op_granpos_diff(&diff,\n   links[_li].pcm_end,links[_li].pcm_start));\n  return pcm_total+diff-links[_li].head.pre_skip;\n}\n\nconst OpusHead *op_head(const OggOpusFile *_of,int _li){\n  if(OP_UNLIKELY(_li>=_of->nlinks))_li=_of->nlinks-1;\n  if(!_of->seekable)_li=0;\n  return &_of->links[_li<0?_of->cur_link:_li].head;\n}\n\nconst OpusTags *op_tags(const OggOpusFile *_of,int _li){\n  if(OP_UNLIKELY(_li>=_of->nlinks))_li=_of->nlinks-1;\n  if(!_of->seekable){\n    if(_of->ready_state<OP_STREAMSET&&_of->ready_state!=OP_PARTOPEN){\n      return NULL;\n    }\n    _li=0;\n  }\n  else if(_li<0)_li=_of->ready_state>=OP_STREAMSET?_of->cur_link:0;\n  return &_of->links[_li].tags;\n}\n\nint op_current_link(const OggOpusFile *_of){\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  return _of->cur_link;\n}\n\n/*Compute an 
average bitrate given a byte and sample count.\n  Return: The bitrate in bits per second.*/\nstatic opus_int32 op_calc_bitrate(opus_int64 _bytes,ogg_int64_t _samples){\n  if(OP_UNLIKELY(_samples<=0))return OP_INT32_MAX;\n  /*These rates are absurd, but let's handle them anyway.*/\n  if(OP_UNLIKELY(_bytes>(OP_INT64_MAX-(_samples>>1))/(48000*8))){\n    ogg_int64_t den;\n    if(OP_UNLIKELY(_bytes/(OP_INT32_MAX/(48000*8))>=_samples)){\n      return OP_INT32_MAX;\n    }\n    den=_samples/(48000*8);\n    return (opus_int32)((_bytes+(den>>1))/den);\n  }\n  /*This can't actually overflow in normal operation: even with a pre-skip of\n     545 2.5 ms frames with 8 streams running at 1282*8+1 bytes per packet\n     (1275 byte frames + Opus framing overhead + Ogg lacing values), that all\n     produce a single sample of decoded output, we still don't top 45 Mbps.\n    The only way to get bitrates larger than that is with excessive Opus\n     padding, more encoded streams than output channels, or lots and lots of\n     Ogg pages with no packets on them.*/\n  return (opus_int32)OP_MIN((_bytes*48000*8+(_samples>>1))/_samples,\n   OP_INT32_MAX);\n}\n\nopus_int32 op_bitrate(const OggOpusFile *_of,int _li){\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED)||OP_UNLIKELY(!_of->seekable)\n   ||OP_UNLIKELY(_li>=_of->nlinks)){\n    return OP_EINVAL;\n  }\n  return op_calc_bitrate(op_raw_total(_of,_li),op_pcm_total(_of,_li));\n}\n\nopus_int32 op_bitrate_instant(OggOpusFile *_of){\n  ogg_int64_t samples_tracked;\n  opus_int32  ret;\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  samples_tracked=_of->samples_tracked;\n  if(OP_UNLIKELY(samples_tracked==0))return OP_FALSE;\n  ret=op_calc_bitrate(_of->bytes_tracked,samples_tracked);\n  _of->bytes_tracked=0;\n  _of->samples_tracked=0;\n  return ret;\n}\n\n/*Given a serialno, find a link with a corresponding Opus stream, if it exists.\n  Return: The index of the link to which the page belongs, or a negative number\n           if 
it was not a desired Opus bitstream section.*/\nstatic int op_get_link_from_serialno(const OggOpusFile *_of,int _cur_link,\n opus_int64 _page_offset,ogg_uint32_t _serialno){\n  const OggOpusLink *links;\n  int                nlinks;\n  int                li_lo;\n  int                li_hi;\n  OP_ASSERT(_of->seekable);\n  links=_of->links;\n  nlinks=_of->nlinks;\n  li_lo=0;\n  /*Start off by guessing we're just a multiplexed page in the current link.*/\n  li_hi=_cur_link+1<nlinks&&_page_offset<links[_cur_link+1].offset?\n   _cur_link+1:nlinks;\n  do{\n    if(_page_offset>=links[_cur_link].offset)li_lo=_cur_link;\n    else li_hi=_cur_link;\n    _cur_link=li_lo+(li_hi-li_lo>>1);\n  }\n  while(li_hi-li_lo>1);\n  /*We've identified the link that should contain this page.\n    Make sure it's a page we care about.*/\n  if(links[_cur_link].serialno!=_serialno)return OP_FALSE;\n  return _cur_link;\n}\n\n/*Fetch and process a page.\n  This handles the case where we're at a bitstream boundary and dumps the\n   decoding machine.\n  If the decoding machine is unloaded, it loads it.\n  It also keeps prev_packet_gp up to date (seek and read both use this).\n  Return: <0) Error, OP_HOLE (lost packet), or OP_EOF.\n           0) Got at least one audio data packet.*/\nstatic int op_fetch_and_process_page(OggOpusFile *_of,\n ogg_page *_og,opus_int64 _page_offset,int _spanp,int _ignore_holes){\n  OggOpusLink  *links;\n  ogg_uint32_t  cur_serialno;\n  int           seekable;\n  int           cur_link;\n  int           ret;\n  /*We shouldn't get here if we have unprocessed packets.*/\n  OP_ASSERT(_of->ready_state<OP_INITSET||_of->op_pos>=_of->op_count);\n  seekable=_of->seekable;\n  links=_of->links;\n  cur_link=seekable?_of->cur_link:0;\n  cur_serialno=links[cur_link].serialno;\n  /*Handle one page.*/\n  for(;;){\n    ogg_page og;\n    OP_ASSERT(_of->ready_state>=OP_OPENED);\n    /*If we were given a page to use, use it.*/\n    if(_og!=NULL){\n      *&og=*_og;\n      _og=NULL;\n    }\n    
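The rounded division at the heart of op_calc_bitrate() above scales bytes to bits per second at 48 kHz and rounds to nearest by adding half the divisor before dividing. A stripped-down sketch (toy_calc_bitrate is a hypothetical stand-in, without the real function's overflow clamps):

```c
#include <stdint.h>

/* Hypothetical simplified op_calc_bitrate(): assumes the product fits in
   64 bits, which the real code verifies before taking this path. */
static int32_t toy_calc_bitrate(int64_t _bytes,int64_t _samples){
  if(_samples<=0)return INT32_MAX;
  /* bytes -> bits (x8), samples -> seconds (/48000); adding _samples>>1
     before dividing rounds to the nearest bit per second. */
  return (int32_t)((_bytes*48000*8+(_samples>>1))/_samples);
}
```

For example, 16000 bytes over one second of 48 kHz audio comes out to 128 kbps.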
/*Keep reading until we get a page with the correct serialno.*/\n    else _page_offset=op_get_next_page(_of,&og,_of->end);\n    /*EOF: Leave uninitialized.*/\n    if(_page_offset<0)return _page_offset<OP_FALSE?(int)_page_offset:OP_EOF;\n    if(OP_LIKELY(_of->ready_state>=OP_STREAMSET)\n     &&cur_serialno!=(ogg_uint32_t)ogg_page_serialno(&og)){\n      /*Two possibilities:\n         1) Another stream is multiplexed into this logical section, or*/\n      if(OP_LIKELY(!ogg_page_bos(&og)))continue;\n      /* 2) Our decoding just traversed a bitstream boundary.*/\n      if(!_spanp)return OP_EOF;\n      if(OP_LIKELY(_of->ready_state>=OP_INITSET))op_decode_clear(_of);\n    }\n    /*Bitrate tracking: add the header's bytes here.\n      The body bytes are counted when we consume the packets.*/\n    else _of->bytes_tracked+=og.header_len;\n    /*Do we need to load a new machine before submitting the page?\n      This is different in the seekable and non-seekable cases.\n      In the seekable case, we already have all the header information loaded\n       and cached.\n      We just initialize the machine with it and continue on our merry way.\n      In the non-seekable (streaming) case, we'll only be at a boundary if we\n       just left the previous logical bitstream, and we're now nominally at the\n       header of the next bitstream.*/\n    if(OP_UNLIKELY(_of->ready_state<OP_STREAMSET)){\n      if(seekable){\n        ogg_uint32_t serialno;\n        serialno=ogg_page_serialno(&og);\n        /*Match the serialno to bitstream section.*/\n        OP_ASSERT(cur_link>=0&&cur_link<_of->nlinks);\n        if(links[cur_link].serialno!=serialno){\n          /*It wasn't a page from the current link.\n            Is it from the next one?*/\n          if(OP_LIKELY(cur_link+1<_of->nlinks&&links[cur_link+1].serialno==\n           serialno)){\n            cur_link++;\n          }\n          else{\n            int new_link;\n            new_link=\n             
op_get_link_from_serialno(_of,cur_link,_page_offset,serialno);\n            /*Not a desired Opus bitstream section.\n              Keep trying.*/\n            if(new_link<0)continue;\n            cur_link=new_link;\n          }\n        }\n        cur_serialno=serialno;\n        _of->cur_link=cur_link;\n        ogg_stream_reset_serialno(&_of->os,serialno);\n        _of->ready_state=OP_STREAMSET;\n        /*If we're at the start of this link, initialize the granule position\n           and pre-skip tracking.*/\n        if(_page_offset<=links[cur_link].data_offset){\n          _of->prev_packet_gp=links[cur_link].pcm_start;\n          _of->prev_page_offset=-1;\n          _of->cur_discard_count=links[cur_link].head.pre_skip;\n          /*Ignore a hole at the start of a new link (this is common for\n             streams joined in the middle) or after seeking.*/\n          _ignore_holes=1;\n        }\n      }\n      else{\n        do{\n          /*We're streaming.\n            Fetch the two header packets, build the info struct.*/\n          ret=op_fetch_headers(_of,&links[0].head,&links[0].tags,\n           NULL,NULL,NULL,&og);\n          if(OP_UNLIKELY(ret<0))return ret;\n          /*op_find_initial_pcm_offset() will suppress any initial hole for us,\n             so no need to set _ignore_holes.*/\n          ret=op_find_initial_pcm_offset(_of,links,&og);\n          if(OP_UNLIKELY(ret<0))return ret;\n          _of->links[0].serialno=cur_serialno=_of->os.serialno;\n          _of->cur_link++;\n        }\n        /*If the link was empty, keep going, because we already have the\n           BOS page of the next one in og.*/\n        while(OP_UNLIKELY(ret>0));\n        /*If we didn't get any packets out of op_find_initial_pcm_offset(),\n           keep going (this is possible if end-trimming trimmed them all).*/\n        if(_of->op_count<=0)continue;\n        /*Otherwise, we're done.\n          TODO: This resets bytes_tracked, which misses the header bytes\n           
already processed by op_find_initial_pcm_offset().*/\n        ret=op_make_decode_ready(_of);\n        if(OP_UNLIKELY(ret<0))return ret;\n        return 0;\n      }\n    }\n    /*The buffered page is the data we want, and we're ready for it.\n      Add it to the stream state.*/\n    if(OP_UNLIKELY(_of->ready_state==OP_STREAMSET)){\n      ret=op_make_decode_ready(_of);\n      if(OP_UNLIKELY(ret<0))return ret;\n    }\n    /*Extract all the packets from the current page.*/\n    ogg_stream_pagein(&_of->os,&og);\n    if(OP_LIKELY(_of->ready_state>=OP_INITSET)){\n      opus_int32 total_duration;\n      int        durations[255];\n      int        op_count;\n      int        report_hole;\n      report_hole=0;\n      total_duration=op_collect_audio_packets(_of,durations);\n      if(OP_UNLIKELY(total_duration<0)){\n        /*libogg reported a hole (a gap in the page sequence numbers).\n          Drain the packets from the page anyway.\n          If we don't, they'll still be there when we fetch the next page.\n          Then, when we go to pull out packets, we might get more than 255,\n           which would overrun our packet buffer.*/\n        total_duration=op_collect_audio_packets(_of,durations);\n        OP_ASSERT(total_duration>=0);\n        if(!_ignore_holes){\n          /*Report the hole to the caller after we finish timestamping the\n             packets.*/\n          report_hole=1;\n          /*We had lost or damaged pages, so reset our granule position\n             tracking.\n            This makes holes behave the same as a small raw seek.\n            If the next page is the EOS page, we'll discard it (because we\n             can't perform end trimming properly), and we'll always discard at\n             least 80 ms of audio (to allow decoder state to re-converge).\n            We could try to fill in the gap with PLC by looking at timestamps\n             in the non-EOS case, but that's complicated and error prone and we\n             can't rely on the 
timestamps being valid.*/\n          _of->prev_packet_gp=-1;\n        }\n      }\n      op_count=_of->op_count;\n      /*If we found at least one audio data packet, compute per-packet granule\n         positions for them.*/\n      if(op_count>0){\n        ogg_int64_t diff;\n        ogg_int64_t prev_packet_gp;\n        ogg_int64_t cur_packet_gp;\n        ogg_int64_t cur_page_gp;\n        int         cur_page_eos;\n        int         pi;\n        cur_page_gp=_of->op[op_count-1].granulepos;\n        cur_page_eos=_of->op[op_count-1].e_o_s;\n        prev_packet_gp=_of->prev_packet_gp;\n        if(OP_UNLIKELY(prev_packet_gp==-1)){\n          opus_int32 cur_discard_count;\n          /*This is the first call after a raw seek.\n            Try to reconstruct prev_packet_gp from scratch.*/\n          OP_ASSERT(seekable);\n          if(OP_UNLIKELY(cur_page_eos)){\n            /*If the first page we hit after our seek was the EOS page, and\n               we didn't start from data_offset or before, we don't have\n               enough information to do end-trimming.\n              Proceed to the next link, rather than risk playing back some\n               samples that shouldn't have been played.*/\n            _of->op_count=0;\n            if(report_hole)return OP_HOLE;\n            continue;\n          }\n          /*By default discard 80 ms of data after a seek, unless we seek\n             into the pre-skip region.*/\n          cur_discard_count=80*48;\n          cur_page_gp=_of->op[op_count-1].granulepos;\n          /*Try to initialize prev_packet_gp.\n            If the current page had packets but didn't have a granule\n             position, or the granule position it had was too small (both\n             illegal), just use the starting granule position for the link.*/\n          prev_packet_gp=links[cur_link].pcm_start;\n          if(OP_LIKELY(cur_page_gp!=-1)){\n            op_granpos_add(&prev_packet_gp,cur_page_gp,-total_duration);\n          }\n          
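The post-seek discard logic here defaults to 80 ms (80*48 samples at 48 kHz), but when the seek lands at the start of, or at least 80 ms before the end of, the pre-skip region, it discards exactly to the end of pre-skip instead. A standalone sketch of that clamp (toy_discard_count and TOY_MAX are hypothetical stand-ins):

```c
#include <stdint.h>

#define TOY_MAX(_a,_b) ((_a)>(_b)?(_a):(_b))

/* _diff: samples between the reconstructed position and the link start.
   _pre_skip: the link's pre-skip, in 48 kHz samples. */
static int32_t toy_discard_count(int64_t _diff,int32_t _pre_skip){
  if(_diff>=0&&_diff<=TOY_MAX(0,_pre_skip-80*48)){
    /* Discard exactly to the end of the pre-skip region. */
    return _pre_skip-(int32_t)_diff;
  }
  /* Otherwise fall back to the 80 ms default. */
  return 80*48;
}
```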
if(OP_LIKELY(!op_granpos_diff(&diff,\n           prev_packet_gp,links[cur_link].pcm_start))){\n            opus_int32 pre_skip;\n            /*If we start at the beginning of the pre-skip region, or we're\n               at least 80 ms from the end of the pre-skip region, we discard\n               to the end of the pre-skip region.\n              Otherwise, we still use the 80 ms default, which will discard\n               past the end of the pre-skip region.*/\n            pre_skip=links[cur_link].head.pre_skip;\n            if(diff>=0&&diff<=OP_MAX(0,pre_skip-80*48)){\n              cur_discard_count=pre_skip-(int)diff;\n            }\n          }\n          _of->cur_discard_count=cur_discard_count;\n        }\n        if(OP_UNLIKELY(cur_page_gp==-1)){\n          /*This page had completed packets but didn't have a valid granule\n             position.\n            This is illegal, but we'll try to handle it by continuing to count\n             forwards from the previous page.*/\n          if(op_granpos_add(&cur_page_gp,prev_packet_gp,total_duration)<0){\n            /*The timestamp for this page overflowed.*/\n            cur_page_gp=links[cur_link].pcm_end;\n          }\n        }\n        /*If we hit the last page, handle end-trimming.*/\n        if(OP_UNLIKELY(cur_page_eos)\n         &&OP_LIKELY(!op_granpos_diff(&diff,cur_page_gp,prev_packet_gp))\n         &&OP_LIKELY(diff<total_duration)){\n          cur_packet_gp=prev_packet_gp;\n          for(pi=0;pi<op_count;pi++){\n            diff=durations[pi]-diff;\n            /*If we have samples to trim...*/\n            if(diff>0){\n              /*If we trimmed the entire packet, stop (the spec says encoders\n                 shouldn't do this, but we support it anyway).*/\n              if(OP_UNLIKELY(diff>durations[pi]))break;\n              cur_packet_gp=cur_page_gp;\n              /*Move the EOS flag to this packet, if necessary, so we'll trim\n                 the samples during decode.*/\n              
_of->op[pi].e_o_s=1;\n            }\n            else{\n              /*Update the granule position as normal.*/\n              OP_ALWAYS_TRUE(!op_granpos_add(&cur_packet_gp,\n               cur_packet_gp,durations[pi]));\n            }\n            _of->op[pi].granulepos=cur_packet_gp;\n            OP_ALWAYS_TRUE(!op_granpos_diff(&diff,cur_page_gp,cur_packet_gp));\n          }\n        }\n        else{\n          /*Propagate timestamps to earlier packets.\n            op_granpos_add(&prev_packet_gp,prev_packet_gp,total_duration)\n             should succeed and give prev_packet_gp==cur_page_gp.\n            But we don't bother to check that, as there isn't much we can do\n             if it's not true, and it actually will not be true on the first\n             page after a seek, if there was a continued packet.\n            The only thing we guarantee is that the start and end granule\n             positions of the packets are valid, and that they are monotonic\n             within a page.\n            They might be completely out of range for this link (we'll check\n             that elsewhere), or non-monotonic between pages.*/\n          if(OP_UNLIKELY(op_granpos_add(&prev_packet_gp,\n           cur_page_gp,-total_duration)<0)){\n            /*The starting timestamp for the first packet on this page\n               underflowed.\n              This is illegal, but we ignore it.*/\n            prev_packet_gp=0;\n          }\n          for(pi=0;pi<op_count;pi++){\n            if(OP_UNLIKELY(op_granpos_add(&cur_packet_gp,\n             cur_page_gp,-total_duration)<0)){\n              /*The start timestamp for this packet underflowed.\n                This is illegal, but we ignore it.*/\n              cur_packet_gp=0;\n            }\n            total_duration-=durations[pi];\n            OP_ASSERT(total_duration>=0);\n            OP_ALWAYS_TRUE(!op_granpos_add(&cur_packet_gp,\n             cur_packet_gp,durations[pi]));\n            
_of->op[pi].granulepos=cur_packet_gp;\n          }\n          OP_ASSERT(total_duration==0);\n        }\n        _of->prev_packet_gp=prev_packet_gp;\n        _of->prev_page_offset=_page_offset;\n        _of->op_count=op_count=pi;\n      }\n      if(report_hole)return OP_HOLE;\n      /*If end-trimming didn't trim all the packets, we're done.*/\n      if(op_count>0)return 0;\n    }\n  }\n}\n\nint op_raw_seek(OggOpusFile *_of,opus_int64 _pos){\n  int ret;\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  /*Don't dump the decoder state if we can't seek.*/\n  if(OP_UNLIKELY(!_of->seekable))return OP_ENOSEEK;\n  if(OP_UNLIKELY(_pos<0)||OP_UNLIKELY(_pos>_of->end))return OP_EINVAL;\n  /*Clear out any buffered, decoded data.*/\n  op_decode_clear(_of);\n  _of->bytes_tracked=0;\n  _of->samples_tracked=0;\n  ret=op_seek_helper(_of,_pos);\n  if(OP_UNLIKELY(ret<0))return OP_EREAD;\n  ret=op_fetch_and_process_page(_of,NULL,-1,1,1);\n  /*If we hit EOF, op_fetch_and_process_page() leaves us uninitialized.\n    Instead, jump to the end.*/\n  if(ret==OP_EOF){\n    int cur_link;\n    op_decode_clear(_of);\n    cur_link=_of->nlinks-1;\n    _of->cur_link=cur_link;\n    _of->prev_packet_gp=_of->links[cur_link].pcm_end;\n    _of->cur_discard_count=0;\n    ret=0;\n  }\n  return ret;\n}\n\n/*Convert a PCM offset relative to the start of the whole stream to a granule\n   position in an individual link.*/\nstatic ogg_int64_t op_get_granulepos(const OggOpusFile *_of,\n ogg_int64_t _pcm_offset,int *_li){\n  const OggOpusLink *links;\n  ogg_int64_t        duration;\n  ogg_int64_t        pcm_start;\n  opus_int32         pre_skip;\n  int                nlinks;\n  int                li_lo;\n  int                li_hi;\n  OP_ASSERT(_pcm_offset>=0);\n  nlinks=_of->nlinks;\n  links=_of->links;\n  li_lo=0;\n  li_hi=nlinks;\n  do{\n    int li;\n    li=li_lo+(li_hi-li_lo>>1);\n    if(links[li].pcm_file_offset<=_pcm_offset)li_lo=li;\n    else li_hi=li;\n  }\n  while(li_hi-li_lo>1);\n  
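The do/while bisection just above (like the one in op_get_link_from_serialno()) computes its midpoint as lo+((hi-lo)>>1), which cannot overflow the way (lo+hi)/2 can. A self-contained sketch of the same search over a sorted offset table (toy_find_link is a hypothetical stand-in):

```c
/* Find the last index whose start offset is <=_target.
   Assumes _n>=1 and _starts[0]<=_target, as the real code can for links. */
static int toy_find_link(const long *_starts,int _n,long _target){
  int li_lo;
  int li_hi;
  li_lo=0;
  li_hi=_n;
  do{
    int li;
    /* lo+((hi-lo)>>1) keeps the intermediate value in range. */
    li=li_lo+((li_hi-li_lo)>>1);
    if(_starts[li]<=_target)li_lo=li;
    else li_hi=li;
  }
  while(li_hi-li_lo>1);
  return li_lo;
}
```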
_pcm_offset-=links[li_lo].pcm_file_offset;\n  pcm_start=links[li_lo].pcm_start;\n  pre_skip=links[li_lo].head.pre_skip;\n  OP_ALWAYS_TRUE(!op_granpos_diff(&duration,links[li_lo].pcm_end,pcm_start));\n  duration-=pre_skip;\n  if(_pcm_offset>=duration)return -1;\n  _pcm_offset+=pre_skip;\n  if(OP_UNLIKELY(pcm_start>OP_INT64_MAX-_pcm_offset)){\n    /*Adding this amount to the granule position would overflow the positive\n       half of its 64-bit range.\n      Since signed overflow is undefined in C, do it in a way the compiler\n       isn't allowed to screw up.*/\n    _pcm_offset-=OP_INT64_MAX-pcm_start+1;\n    pcm_start=OP_INT64_MIN;\n  }\n  pcm_start+=_pcm_offset;\n  *_li=li_lo;\n  return pcm_start;\n}\n\n/*A small helper to determine if an Ogg page contains data that continues onto\n   a subsequent page.*/\nstatic int op_page_continues(const ogg_page *_og){\n  int nlacing;\n  OP_ASSERT(_og->header_len>=27);\n  nlacing=_og->header[26];\n  OP_ASSERT(_og->header_len>=27+nlacing);\n  /*This also correctly handles the (unlikely) case of nlacing==0, because\n     0!=255.*/\n  return _og->header[27+nlacing-1]==255;\n}\n\n/*A small helper to buffer the continued packet data from a page.*/\nstatic void op_buffer_continued_data(OggOpusFile *_of,ogg_page *_og){\n  ogg_packet op;\n  ogg_stream_pagein(&_of->os,_og);\n  /*Drain any packets that did end on this page (and ignore holes).\n    We only care about the continued packet data.*/\n  while(ogg_stream_packetout(&_of->os,&op));\n}\n\n/*This controls how close the target has to be to use the current stream\n   position to subdivide the initial range.\n  Two minutes seems to be a good default.*/\n#define OP_CUR_TIME_THRESH (120*48*(opus_int32)1000)\n\n/*Note: The OP_SMALL_FOOTPRINT #define doesn't (currently) save much code size,\n   but it's meant to serve as documentation for portions of the seeking\n   algorithm that are purely optional, to aid others learning from/porting this\n   code to other contexts.*/\n/*#define 
OP_SMALL_FOOTPRINT (1)*/\n\n/*Search within link _li for the page with the highest granule position\n   preceding (or equal to) _target_gp.\n  There is a danger here: missing pages or incorrect frame number information\n   in the bitstream could make our task impossible.\n  Account for that (and report it as an error condition).*/\nstatic int op_pcm_seek_page(OggOpusFile *_of,\n ogg_int64_t _target_gp,int _li){\n  const OggOpusLink *link;\n  ogg_page           og;\n  ogg_int64_t        pcm_pre_skip;\n  ogg_int64_t        pcm_start;\n  ogg_int64_t        pcm_end;\n  ogg_int64_t        best_gp;\n  ogg_int64_t        diff;\n  ogg_uint32_t       serialno;\n  opus_int32         pre_skip;\n  opus_int64         begin;\n  opus_int64         end;\n  opus_int64         boundary;\n  opus_int64         best;\n  opus_int64         best_start;\n  opus_int64         page_offset;\n  opus_int64         d0;\n  opus_int64         d1;\n  opus_int64         d2;\n  int                force_bisect;\n  int                buffering;\n  int                ret;\n  _of->bytes_tracked=0;\n  _of->samples_tracked=0;\n  link=_of->links+_li;\n  best_gp=pcm_start=link->pcm_start;\n  pcm_end=link->pcm_end;\n  serialno=link->serialno;\n  best=best_start=begin=link->data_offset;\n  page_offset=-1;\n  buffering=0;\n  /*We discard the first 80 ms of data after a seek, so seek back that much\n     farther.\n    If we can't, simply seek to the beginning of the link.*/\n  if(OP_UNLIKELY(op_granpos_add(&_target_gp,_target_gp,-80*48)<0)\n   ||OP_UNLIKELY(op_granpos_cmp(_target_gp,pcm_start)<0)){\n    _target_gp=pcm_start;\n  }\n  /*Special case seeking to the start of the link.*/\n  pre_skip=link->head.pre_skip;\n  OP_ALWAYS_TRUE(!op_granpos_add(&pcm_pre_skip,pcm_start,pre_skip));\n  if(op_granpos_cmp(_target_gp,pcm_pre_skip)<0)end=boundary=begin;\n  else{\n    end=boundary=link->end_offset;\n#if !defined(OP_SMALL_FOOTPRINT)\n    /*If we were decoding from this link, we can narrow the range a bit.*/\n    
if(_li==_of->cur_link&&_of->ready_state>=OP_INITSET){\n      opus_int64 offset;\n      int        op_count;\n      op_count=_of->op_count;\n      /*The only way the offset can be invalid _and_ we can fail the granule\n         position checks below is if someone changed the contents of the last\n         page since we read it.\n        We'd be within our rights to just return OP_EBADLINK in that case, but\n         we'll simply ignore the current position instead.*/\n      offset=_of->offset;\n      if(op_count>0&&OP_LIKELY(offset<=end)){\n        ogg_int64_t gp;\n        /*Make sure the timestamp is valid.\n          The granule position might be -1 if we collected the packets from a\n           page without a granule position after reporting a hole.*/\n        gp=_of->op[op_count-1].granulepos;\n        if(OP_LIKELY(gp!=-1)&&OP_LIKELY(op_granpos_cmp(pcm_start,gp)<0)\n         &&OP_LIKELY(op_granpos_cmp(pcm_end,gp)>0)){\n          OP_ALWAYS_TRUE(!op_granpos_diff(&diff,gp,_target_gp));\n          /*We only actually use the current time if either\n            a) We can cut off at least half the range, or\n            b) We're seeking sufficiently close to the current position that\n                it's likely to be informative.\n            Otherwise it appears using the whole link range to estimate the\n             first seek location gives better results, on average.*/\n          if(diff<0){\n            OP_ASSERT(offset>=begin);\n            if(offset-begin>=end-begin>>1||diff>-OP_CUR_TIME_THRESH){\n              best=begin=offset;\n              best_gp=pcm_start=gp;\n              /*If we have buffered data from a continued packet, remember the\n                 offset of the previous page's start, so that if we do wind up\n                 having to seek back here later, we can prime the stream with\n                 the continued packet data.\n                With no continued packet, we remember the end of the page.*/\n              
best_start=_of->os.body_returned<_of->os.body_fill?\n               _of->prev_page_offset:best;\n              /*If there's completed packets and data in the stream state,\n                 prev_page_offset should always be set.*/\n              OP_ASSERT(best_start>=0);\n              /*Buffer any continued packet data starting from here.*/\n              buffering=1;\n            }\n          }\n          else{\n            ogg_int64_t prev_page_gp;\n            /*We might get lucky and already have the packet with the target\n               buffered.\n              Worth checking.\n              For very small files (with all of the data in a single page,\n               generally 1 second or less), we can loop them continuously\n               without seeking at all.*/\n            OP_ALWAYS_TRUE(!op_granpos_add(&prev_page_gp,_of->op[0].granulepos,\n             -op_get_packet_duration(_of->op[0].packet,_of->op[0].bytes)));\n            if(op_granpos_cmp(prev_page_gp,_target_gp)<=0){\n              /*Don't call op_decode_clear(), because it will dump our\n                 packets.*/\n              _of->op_pos=0;\n              _of->od_buffer_size=0;\n              _of->prev_packet_gp=prev_page_gp;\n              /*_of->prev_page_offset already points to the right place.*/\n              _of->ready_state=OP_STREAMSET;\n              return op_make_decode_ready(_of);\n            }\n            /*No such luck.\n              Check if we can cut off at least half the range, though.*/\n            if(offset-begin<=end-begin>>1||diff<OP_CUR_TIME_THRESH){\n              /*We really want the page start here, but this will do.*/\n              end=boundary=offset;\n              pcm_end=gp;\n            }\n          }\n        }\n      }\n    }\n#endif\n  }\n  /*This code was originally based on the \"new search algorithm by HB (Nicholas\n     Vinen)\" from libvorbisfile.\n    It has been modified substantially since.*/\n  op_decode_clear(_of);\n  
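The bisection loop that follows seeds each probe with an interpolated guess: rescale the target's fraction of the known PCM range onto the byte range, then back off by a chunk so the scan starts just before the target. A simplified sketch (toy_rescale is a hypothetical stand-in for op_rescale64 and assumes its products fit in 64 bits):

```c
#include <stdint.h>

#define TOY_CHUNK_SIZE (65536)

static int64_t toy_rescale(int64_t _x,int64_t _from,int64_t _to){
  return _from>0?_x*_to/_from:0;
}

/* Guess a byte position for a target _pcm_off samples into a link whose
   total duration is _pcm_len samples and whose data spans [_begin,_end). */
static int64_t toy_seek_guess(int64_t _begin,int64_t _end,
 int64_t _pcm_off,int64_t _pcm_len){
  int64_t bisect;
  bisect=_begin+toy_rescale(_pcm_off,_pcm_len,_end-_begin)-TOY_CHUNK_SIZE;
  /* Don't bother seeking if we'd land within a chunk of the start anyway. */
  if(bisect-TOY_CHUNK_SIZE<_begin)bisect=_begin;
  return bisect;
}
```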
if(!buffering)ogg_stream_reset_serialno(&_of->os,serialno);\n  _of->cur_link=_li;\n  _of->ready_state=OP_STREAMSET;\n  /*Initialize the interval size history.*/\n  d2=d1=d0=end-begin;\n  force_bisect=0;\n  while(begin<end){\n    opus_int64 bisect;\n    opus_int64 next_boundary;\n    opus_int32 chunk_size;\n    if(end-begin<OP_CHUNK_SIZE)bisect=begin;\n    else{\n      /*Update the interval size history.*/\n      d0=d1>>1;\n      d1=d2>>1;\n      d2=end-begin>>1;\n      if(force_bisect)bisect=begin+(end-begin>>1);\n      else{\n        ogg_int64_t diff2;\n        OP_ALWAYS_TRUE(!op_granpos_diff(&diff,_target_gp,pcm_start));\n        OP_ALWAYS_TRUE(!op_granpos_diff(&diff2,pcm_end,pcm_start));\n        /*Take a (pretty decent) guess.*/\n        bisect=begin+op_rescale64(diff,diff2,end-begin)-OP_CHUNK_SIZE;\n      }\n      if(bisect-OP_CHUNK_SIZE<begin)bisect=begin;\n      force_bisect=0;\n    }\n    if(bisect!=_of->offset){\n      /*Discard any buffered continued packet data.*/\n      if(buffering)ogg_stream_reset(&_of->os);\n      buffering=0;\n      page_offset=-1;\n      ret=op_seek_helper(_of,bisect);\n      if(OP_UNLIKELY(ret<0))return ret;\n    }\n    chunk_size=OP_CHUNK_SIZE;\n    next_boundary=boundary;\n    /*Now scan forward and figure out where we landed.\n      In the ideal case, we will see a page with a granule position at or\n       before our target, followed by a page with a granule position after our\n       target (or the end of the search interval).\n      Then we can just drop out and will have all of the data we need with no\n       additional seeking.\n      If we landed too far before, or after, we'll break out and do another\n       bisection.*/\n    while(begin<end){\n      page_offset=op_get_next_page(_of,&og,boundary);\n      if(page_offset<0){\n        if(page_offset<OP_FALSE)return (int)page_offset;\n        /*There are no more pages in our interval from our stream with a valid\n           timestamp that start at position bisect or 
later.*/\n        /*If we scanned the whole interval, we're done.*/\n        if(bisect<=begin+1)end=begin;\n        else{\n          /*Otherwise, back up one chunk.\n            First, discard any data from a continued packet.*/\n          if(buffering)ogg_stream_reset(&_of->os);\n          buffering=0;\n          bisect=OP_MAX(bisect-chunk_size,begin);\n          ret=op_seek_helper(_of,bisect);\n          if(OP_UNLIKELY(ret<0))return ret;\n          /*Bump up the chunk size.*/\n          chunk_size=OP_MIN(2*chunk_size,OP_CHUNK_SIZE_MAX);\n          /*If we did find a page from another stream or without a timestamp,\n             don't read past it.*/\n          boundary=next_boundary;\n        }\n      }\n      else{\n        ogg_int64_t gp;\n        int         has_packets;\n        /*Save the offset of the first page we found after the seek, regardless\n           of the stream it came from or whether or not it has a timestamp.*/\n        next_boundary=OP_MIN(page_offset,next_boundary);\n        if(serialno!=(ogg_uint32_t)ogg_page_serialno(&og))continue;\n        has_packets=ogg_page_packets(&og)>0;\n        /*Force the gp to -1 (as it should be per spec) if no packets end on\n           this page.\n          Otherwise we might get confused when we try to pull out a packet\n           with that timestamp and can't find it.*/\n        gp=has_packets?ogg_page_granulepos(&og):-1;\n        if(gp==-1){\n          if(buffering){\n            if(OP_LIKELY(!has_packets))ogg_stream_pagein(&_of->os,&og);\n            else{\n              /*If packets did end on this page, but we still didn't have a\n                 valid granule position (in violation of the spec!), stop\n                 buffering continued packet data.\n                Otherwise we might continue past the packet we actually\n                 wanted.*/\n              ogg_stream_reset(&_of->os);\n              buffering=0;\n            }\n          }\n          continue;\n        }\n        
if(op_granpos_cmp(gp,_target_gp)<0){\n          /*We found a page that ends before our target.\n            Advance to the raw offset of the next page.*/\n          begin=_of->offset;\n          if(OP_UNLIKELY(op_granpos_cmp(pcm_start,gp)>0)\n           ||OP_UNLIKELY(op_granpos_cmp(pcm_end,gp)<0)){\n            /*Don't let pcm_start get out of range!\n              That could happen with an invalid timestamp.*/\n            break;\n          }\n          /*Save the byte offset of the end of the page with this granule\n             position.*/\n          best=best_start=begin;\n          /*Buffer any data from a continued packet, if necessary.\n            This avoids the need to seek back here if the next timestamp we\n             encounter while scanning forward lies after our target.*/\n          if(buffering)ogg_stream_reset(&_of->os);\n          if(op_page_continues(&og)){\n            op_buffer_continued_data(_of,&og);\n            /*If we have a continued packet, remember the offset of this\n               page's start, so that if we do wind up having to seek back here\n               later, we can prime the stream with the continued packet data.\n              With no continued packet, we remember the end of the page.*/\n            best_start=page_offset;\n          }\n          /*Then force buffering on, so that if a packet starts (but does not\n             end) on the next page, we still avoid the extra seek back.*/\n          buffering=1;\n          best_gp=pcm_start=gp;\n          OP_ALWAYS_TRUE(!op_granpos_diff(&diff,_target_gp,pcm_start));\n          /*If we're more than a second away from our target, break out and\n             do another bisection.*/\n          if(diff>48000)break;\n          /*Otherwise, keep scanning forward (do NOT use begin+1).*/\n          bisect=begin;\n        }\n        else{\n          /*We found a page that ends after our target.*/\n          /*If we scanned the whole interval before we found it, we're done.*/\n          
if(bisect<=begin+1)end=begin;\n          else{\n            end=bisect;\n            /*In later iterations, don't read past the first page we found.*/\n            boundary=next_boundary;\n            /*If we're not making much progress shrinking the interval size,\n               start forcing straight bisection to limit the worst case.*/\n            force_bisect=end-begin>d0*2;\n            /*Don't let pcm_end get out of range!\n              That could happen with an invalid timestamp.*/\n            if(OP_LIKELY(op_granpos_cmp(pcm_end,gp)>0)\n             &&OP_LIKELY(op_granpos_cmp(pcm_start,gp)<=0)){\n              pcm_end=gp;\n            }\n            break;\n          }\n        }\n      }\n    }\n  }\n  /*Found our page.*/\n  OP_ASSERT(op_granpos_cmp(best_gp,pcm_start)>=0);\n  /*Seek, if necessary.\n    If we were buffering data from a continued packet, we should be able to\n     continue to scan forward to get the rest of the data (even if\n     page_offset==-1).\n    Otherwise, we need to seek back to best_start.*/\n  if(!buffering){\n    if(best_start!=page_offset){\n      page_offset=-1;\n      ret=op_seek_helper(_of,best_start);\n      if(OP_UNLIKELY(ret<0))return ret;\n    }\n    if(best_start<best){\n      /*Retrieve the page at best_start, if we do not already have it.*/\n      if(page_offset<0){\n        page_offset=op_get_next_page(_of,&og,link->end_offset);\n        if(OP_UNLIKELY(page_offset<OP_FALSE))return (int)page_offset;\n        if(OP_UNLIKELY(page_offset!=best_start))return OP_EBADLINK;\n      }\n      op_buffer_continued_data(_of,&og);\n      page_offset=-1;\n    }\n  }\n  /*Update prev_packet_gp to allow per-packet granule position assignment.*/\n  _of->prev_packet_gp=best_gp;\n  _of->prev_page_offset=best_start;\n  ret=op_fetch_and_process_page(_of,page_offset<0?NULL:&og,page_offset,0,1);\n  if(OP_UNLIKELY(ret<0))return OP_EBADLINK;\n  /*Verify result.*/\n  if(OP_UNLIKELY(op_granpos_cmp(_of->prev_packet_gp,_target_gp)>0)){\n    
return OP_EBADLINK;\n  }\n  /*Our caller will set cur_discard_count to handle pre-roll.*/\n  return 0;\n}\n\nint op_pcm_seek(OggOpusFile *_of,ogg_int64_t _pcm_offset){\n  const OggOpusLink *link;\n  ogg_int64_t        pcm_start;\n  ogg_int64_t        target_gp;\n  ogg_int64_t        prev_packet_gp;\n  ogg_int64_t        skip;\n  ogg_int64_t        diff;\n  int                op_count;\n  int                op_pos;\n  int                ret;\n  int                li;\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  if(OP_UNLIKELY(!_of->seekable))return OP_ENOSEEK;\n  if(OP_UNLIKELY(_pcm_offset<0))return OP_EINVAL;\n  target_gp=op_get_granulepos(_of,_pcm_offset,&li);\n  if(OP_UNLIKELY(target_gp==-1))return OP_EINVAL;\n  link=_of->links+li;\n  pcm_start=link->pcm_start;\n  OP_ALWAYS_TRUE(!op_granpos_diff(&_pcm_offset,target_gp,pcm_start));\n#if !defined(OP_SMALL_FOOTPRINT)\n  /*For small (90 ms or less) forward seeks within the same link, just decode\n     forward.\n    This also optimizes the case of seeking to the current position.*/\n  if(li==_of->cur_link&&_of->ready_state>=OP_INITSET){\n    ogg_int64_t gp;\n    gp=_of->prev_packet_gp;\n    if(OP_LIKELY(gp!=-1)){\n      int nbuffered;\n      nbuffered=OP_MAX(_of->od_buffer_size-_of->od_buffer_pos,0);\n      OP_ALWAYS_TRUE(!op_granpos_add(&gp,gp,-nbuffered));\n      /*We do _not_ add cur_discard_count to gp.\n        Otherwise the total amount to discard could grow without bound, and it\n         would be better just to do a full seek.*/\n      if(OP_LIKELY(!op_granpos_diff(&diff,gp,pcm_start))){\n        ogg_int64_t discard_count;\n        discard_count=_pcm_offset-diff;\n        /*We use a threshold of 90 ms instead of 80, since 80 ms is the\n           _minimum_ we would have discarded after a full seek.\n          Assuming 20 ms frames (the default), we'd discard 90 ms on average.*/\n        if(discard_count>=0&&OP_UNLIKELY(discard_count<90*48)){\n          
_of->cur_discard_count=(opus_int32)discard_count;\n          return 0;\n        }\n      }\n    }\n  }\n#endif\n  ret=op_pcm_seek_page(_of,target_gp,li);\n  if(OP_UNLIKELY(ret<0))return ret;\n  /*Now skip samples until we actually get to our target.*/\n  /*Figure out where we should skip to.*/\n  if(_pcm_offset<=link->head.pre_skip)skip=0;\n  else skip=OP_MAX(_pcm_offset-80*48,0);\n  OP_ASSERT(_pcm_offset-skip>=0);\n  OP_ASSERT(_pcm_offset-skip<OP_INT32_MAX-120*48);\n  /*Skip packets until we find one with samples past our skip target.*/\n  for(;;){\n    op_count=_of->op_count;\n    prev_packet_gp=_of->prev_packet_gp;\n    for(op_pos=_of->op_pos;op_pos<op_count;op_pos++){\n      ogg_int64_t cur_packet_gp;\n      cur_packet_gp=_of->op[op_pos].granulepos;\n      if(OP_LIKELY(!op_granpos_diff(&diff,cur_packet_gp,pcm_start))\n       &&diff>skip){\n        break;\n      }\n      prev_packet_gp=cur_packet_gp;\n    }\n    _of->prev_packet_gp=prev_packet_gp;\n    _of->op_pos=op_pos;\n    if(op_pos<op_count)break;\n    /*We skipped all the packets on this page.\n      Fetch another.*/\n    ret=op_fetch_and_process_page(_of,NULL,-1,0,1);\n    if(OP_UNLIKELY(ret<0))return OP_EBADLINK;\n  }\n  OP_ALWAYS_TRUE(!op_granpos_diff(&diff,prev_packet_gp,pcm_start));\n  /*We skipped too far.\n    Either the timestamps were illegal or there was a hole in the data.*/\n  if(diff>skip)return OP_EBADLINK;\n  OP_ASSERT(_pcm_offset-diff<OP_INT32_MAX);\n  /*TODO: If there are further holes/illegal timestamps, we still won't decode\n     to the correct sample.\n    However, at least op_pcm_tell() will report the correct value immediately\n     after returning.*/\n  _of->cur_discard_count=(opus_int32)(_pcm_offset-diff);\n  return 0;\n}\n\nopus_int64 op_raw_tell(const OggOpusFile *_of){\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  return _of->offset;\n}\n\n/*Convert a granule position from a given link to a PCM offset relative to the\n   start of the whole stream.\n  For 
unseekable sources, this gets reset to 0 at the beginning of each link.*/\nstatic ogg_int64_t op_get_pcm_offset(const OggOpusFile *_of,\n ogg_int64_t _gp,int _li){\n  const OggOpusLink *links;\n  ogg_int64_t        pcm_offset;\n  links=_of->links;\n  OP_ASSERT(_li>=0&&_li<_of->nlinks);\n  pcm_offset=links[_li].pcm_file_offset;\n  if(_of->seekable&&OP_UNLIKELY(op_granpos_cmp(_gp,links[_li].pcm_end)>0)){\n    _gp=links[_li].pcm_end;\n  }\n  if(OP_LIKELY(op_granpos_cmp(_gp,links[_li].pcm_start)>0)){\n    ogg_int64_t delta;\n    if(OP_UNLIKELY(op_granpos_diff(&delta,_gp,links[_li].pcm_start)<0)){\n      /*This means an unseekable stream claimed to have a page from more than\n         2 billion days after we joined.*/\n      OP_ASSERT(!_of->seekable);\n      return OP_INT64_MAX;\n    }\n    if(delta<links[_li].head.pre_skip)delta=0;\n    else delta-=links[_li].head.pre_skip;\n    /*In the seekable case, _gp was limited by pcm_end.\n      In the unseekable case, pcm_offset should be 0.*/\n    OP_ASSERT(pcm_offset<=OP_INT64_MAX-delta);\n    pcm_offset+=delta;\n  }\n  return pcm_offset;\n}\n\nogg_int64_t op_pcm_tell(const OggOpusFile *_of){\n  ogg_int64_t gp;\n  int         nbuffered;\n  int         li;\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  gp=_of->prev_packet_gp;\n  if(gp==-1)return 0;\n  nbuffered=OP_MAX(_of->od_buffer_size-_of->od_buffer_pos,0);\n  OP_ALWAYS_TRUE(!op_granpos_add(&gp,gp,-nbuffered));\n  li=_of->seekable?_of->cur_link:0;\n  if(op_granpos_add(&gp,gp,_of->cur_discard_count)<0){\n    gp=_of->links[li].pcm_end;\n  }\n  return op_get_pcm_offset(_of,gp,li);\n}\n\nvoid op_set_decode_callback(OggOpusFile *_of,\n op_decode_cb_func _decode_cb,void *_ctx){\n  _of->decode_cb=_decode_cb;\n  _of->decode_cb_ctx=_ctx;\n}\n\nint op_set_gain_offset(OggOpusFile *_of,\n int _gain_type,opus_int32 _gain_offset_q8){\n  if(_gain_type!=OP_HEADER_GAIN&&_gain_type!=OP_ALBUM_GAIN\n   &&_gain_type!=OP_TRACK_GAIN&&_gain_type!=OP_ABSOLUTE_GAIN){\n    return 
OP_EINVAL;\n  }\n  _of->gain_type=_gain_type;\n  /*The sum of header gain and track gain lies in the range [-65536,65534].\n    These bounds allow the offset to set the final value to anywhere in the\n     range [-32768,32767], which is what we'll clamp it to before applying.*/\n  _of->gain_offset_q8=OP_CLAMP(-98302,_gain_offset_q8,98303);\n  op_update_gain(_of);\n  return 0;\n}\n\nvoid op_set_dither_enabled(OggOpusFile *_of,int _enabled){\n#if !defined(OP_FIXED_POINT)\n  _of->dither_disabled=!_enabled;\n  if(!_enabled)_of->dither_mute=65;\n#endif\n}\n\n/*Allocate the decoder scratch buffer.\n  This is done lazily, since if the user provides large enough buffers, we'll\n   never need it.*/\nstatic int op_init_buffer(OggOpusFile *_of){\n  int nchannels_max;\n  if(_of->seekable){\n    const OggOpusLink *links;\n    int                nlinks;\n    int                li;\n    links=_of->links;\n    nlinks=_of->nlinks;\n    nchannels_max=1;\n    for(li=0;li<nlinks;li++){\n      nchannels_max=OP_MAX(nchannels_max,links[li].head.channel_count);\n    }\n  }\n  else nchannels_max=OP_NCHANNELS_MAX;\n  _of->od_buffer=(op_sample *)_ogg_malloc(\n   sizeof(*_of->od_buffer)*nchannels_max*120*48);\n  if(_of->od_buffer==NULL)return OP_EFAULT;\n  return 0;\n}\n\n/*Decode a single packet into the target buffer.*/\nstatic int op_decode(OggOpusFile *_of,op_sample *_pcm,\n const ogg_packet *_op,int _nsamples,int _nchannels){\n  int ret;\n  /*First we try using the application-provided decode callback.*/\n  if(_of->decode_cb!=NULL){\n#if defined(OP_FIXED_POINT)\n    ret=(*_of->decode_cb)(_of->decode_cb_ctx,_of->od,_pcm,_op,\n     _nsamples,_nchannels,OP_DEC_FORMAT_SHORT,_of->cur_link);\n#else\n    ret=(*_of->decode_cb)(_of->decode_cb_ctx,_of->od,_pcm,_op,\n     _nsamples,_nchannels,OP_DEC_FORMAT_FLOAT,_of->cur_link);\n#endif\n  }\n  else ret=OP_DEC_USE_DEFAULT;\n  /*If the application didn't want to handle decoding, do it ourselves.*/\n  if(ret==OP_DEC_USE_DEFAULT){\n#if 
defined(OP_FIXED_POINT)\n    ret=opus_multistream_decode(_of->od,\n     _op->packet,_op->bytes,_pcm,_nsamples,0);\n#else\n    ret=opus_multistream_decode_float(_of->od,\n     _op->packet,_op->bytes,_pcm,_nsamples,0);\n#endif\n    OP_ASSERT(ret<0||ret==_nsamples);\n  }\n  /*If the application returned a positive value other than 0 or\n     OP_DEC_USE_DEFAULT, fail.*/\n  else if(OP_UNLIKELY(ret>0))return OP_EBADPACKET;\n  if(OP_UNLIKELY(ret<0))return OP_EBADPACKET;\n  return ret;\n}\n\n/*Read more samples from the stream, using the same API as op_read() or\n   op_read_float().*/\nstatic int op_read_native(OggOpusFile *_of,\n op_sample *_pcm,int _buf_size,int *_li){\n  if(OP_UNLIKELY(_of->ready_state<OP_OPENED))return OP_EINVAL;\n  for(;;){\n    int ret;\n    if(OP_LIKELY(_of->ready_state>=OP_INITSET)){\n      int nchannels;\n      int od_buffer_pos;\n      int nsamples;\n      int op_pos;\n      nchannels=_of->links[_of->seekable?_of->cur_link:0].head.channel_count;\n      od_buffer_pos=_of->od_buffer_pos;\n      nsamples=_of->od_buffer_size-od_buffer_pos;\n      /*If we have buffered samples, return them.*/\n      if(nsamples>0){\n        if(nsamples*nchannels>_buf_size)nsamples=_buf_size/nchannels;\n        memcpy(_pcm,_of->od_buffer+nchannels*od_buffer_pos,\n         sizeof(*_pcm)*nchannels*nsamples);\n        od_buffer_pos+=nsamples;\n        _of->od_buffer_pos=od_buffer_pos;\n        if(_li!=NULL)*_li=_of->cur_link;\n        return nsamples;\n      }\n      /*If we have buffered packets, decode one.*/\n      op_pos=_of->op_pos;\n      if(OP_LIKELY(op_pos<_of->op_count)){\n        const ogg_packet *pop;\n        ogg_int64_t       diff;\n        opus_int32        cur_discard_count;\n        int               duration;\n        int               trimmed_duration;\n        pop=_of->op+op_pos++;\n        _of->op_pos=op_pos;\n        cur_discard_count=_of->cur_discard_count;\n        duration=op_get_packet_duration(pop->packet,pop->bytes);\n        /*We don't buffer 
packets with an invalid TOC sequence.*/\n        OP_ASSERT(duration>0);\n        trimmed_duration=duration;\n        /*Perform end-trimming.*/\n        if(OP_UNLIKELY(pop->e_o_s)){\n          if(OP_UNLIKELY(op_granpos_cmp(pop->granulepos,\n           _of->prev_packet_gp)<=0)){\n            trimmed_duration=0;\n          }\n          else if(OP_LIKELY(!op_granpos_diff(&diff,\n           pop->granulepos,_of->prev_packet_gp))){\n            trimmed_duration=(int)OP_MIN(diff,trimmed_duration);\n          }\n        }\n        _of->prev_packet_gp=pop->granulepos;\n        if(OP_UNLIKELY(duration*nchannels>_buf_size)){\n          op_sample *buf;\n          /*If the user's buffer is too small, decode into a scratch buffer.*/\n          buf=_of->od_buffer;\n          if(OP_UNLIKELY(buf==NULL)){\n            ret=op_init_buffer(_of);\n            if(OP_UNLIKELY(ret<0))return ret;\n            buf=_of->od_buffer;\n          }\n          ret=op_decode(_of,buf,pop,duration,nchannels);\n          if(OP_UNLIKELY(ret<0))return ret;\n          /*Perform pre-skip/pre-roll.*/\n          od_buffer_pos=(int)OP_MIN(trimmed_duration,cur_discard_count);\n          cur_discard_count-=od_buffer_pos;\n          _of->cur_discard_count=cur_discard_count;\n          _of->od_buffer_pos=od_buffer_pos;\n          _of->od_buffer_size=trimmed_duration;\n          /*Update bitrate tracking based on the actual samples we used from\n             what was decoded.*/\n          _of->bytes_tracked+=pop->bytes;\n          _of->samples_tracked+=trimmed_duration-od_buffer_pos;\n        }\n        else{\n          /*Otherwise decode directly into the user's buffer.*/\n          ret=op_decode(_of,_pcm,pop,duration,nchannels);\n          if(OP_UNLIKELY(ret<0))return ret;\n          if(OP_LIKELY(trimmed_duration>0)){\n            /*Perform pre-skip/pre-roll.*/\n            od_buffer_pos=(int)OP_MIN(trimmed_duration,cur_discard_count);\n            cur_discard_count-=od_buffer_pos;\n            
_of->cur_discard_count=cur_discard_count;\n            trimmed_duration-=od_buffer_pos;\n            if(OP_LIKELY(trimmed_duration>0)\n             &&OP_UNLIKELY(od_buffer_pos>0)){\n              memmove(_pcm,_pcm+od_buffer_pos*nchannels,\n               sizeof(*_pcm)*trimmed_duration*nchannels);\n            }\n            /*Update bitrate tracking based on the actual samples we used from\n               what was decoded.*/\n            _of->bytes_tracked+=pop->bytes;\n            _of->samples_tracked+=trimmed_duration;\n            if(OP_LIKELY(trimmed_duration>0)){\n              if(_li!=NULL)*_li=_of->cur_link;\n              return trimmed_duration;\n            }\n          }\n        }\n        /*Don't grab another page yet.\n          This one might have more packets, or might have buffered data now.*/\n        continue;\n      }\n    }\n    /*Suck in another page.*/\n    ret=op_fetch_and_process_page(_of,NULL,-1,1,0);\n    if(OP_UNLIKELY(ret==OP_EOF)){\n      if(_li!=NULL)*_li=_of->cur_link;\n      return 0;\n    }\n    if(OP_UNLIKELY(ret<0))return ret;\n  }\n}\n\n/*A generic filter to apply to the decoded audio data.\n  _src is non-const because we will destructively modify the contents of the\n   source buffer that we consume in some cases.*/\ntypedef int (*op_read_filter_func)(OggOpusFile *_of,void *_dst,int _dst_sz,\n op_sample *_src,int _nsamples,int _nchannels);\n\n/*Decode some samples and then apply a custom filter to them.\n  This is used to convert to different output formats.*/\nstatic int op_filter_read_native(OggOpusFile *_of,void *_dst,int _dst_sz,\n op_read_filter_func _filter,int *_li){\n  int ret;\n  /*Ensure we have some decoded samples in our buffer.*/\n  ret=op_read_native(_of,NULL,0,_li);\n  /*Now apply the filter to them.*/\n  if(OP_LIKELY(ret>=0)&&OP_LIKELY(_of->ready_state>=OP_INITSET)){\n    int od_buffer_pos;\n    od_buffer_pos=_of->od_buffer_pos;\n    ret=_of->od_buffer_size-od_buffer_pos;\n    if(OP_LIKELY(ret>0)){\n      int 
nchannels;\n      nchannels=_of->links[_of->seekable?_of->cur_link:0].head.channel_count;\n      ret=(*_filter)(_of,_dst,_dst_sz,\n       _of->od_buffer+nchannels*od_buffer_pos,ret,nchannels);\n      OP_ASSERT(ret>=0);\n      OP_ASSERT(ret<=_of->od_buffer_size-od_buffer_pos);\n      od_buffer_pos+=ret;\n      _of->od_buffer_pos=od_buffer_pos;\n    }\n  }\n  return ret;\n}\n\n#if !defined(OP_FIXED_POINT)||!defined(OP_DISABLE_FLOAT_API)\n\n/*Matrices for downmixing from the supported channel counts to stereo.\n  The matrices with 5 or more channels are normalized to a total volume of 2.0,\n   since most mixes sound too quiet if normalized to 1.0 (as there is generally\n   little volume in the side/rear channels).*/\nstatic const float OP_STEREO_DOWNMIX[OP_NCHANNELS_MAX-2][OP_NCHANNELS_MAX][2]={\n  /*3.0*/\n  {\n    {0.5858F,0.0F},{0.4142F,0.4142F},{0.0F,0.5858F}\n  },\n  /*quadrophonic*/\n  {\n    {0.4226F,0.0F},{0.0F,0.4226F},{0.366F,0.2114F},{0.2114F,0.366F}\n  },\n  /*5.0*/\n  {\n    {0.651F,0.0F},{0.46F,0.46F},{0.0F,0.651F},{0.5636F,0.3254F},\n    {0.3254F,0.5636F}\n  },\n  /*5.1*/\n  {\n    {0.529F,0.0F},{0.3741F,0.3741F},{0.0F,0.529F},{0.4582F,0.2645F},\n    {0.2645F,0.4582F},{0.3741F,0.3741F}\n  },\n  /*6.1*/\n  {\n    {0.4553F,0.0F},{0.322F,0.322F},{0.0F,0.4553F},{0.3943F,0.2277F},\n    {0.2277F,0.3943F},{0.2788F,0.2788F},{0.322F,0.322F}\n  },\n  /*7.1*/\n  {\n    {0.3886F,0.0F},{0.2748F,0.2748F},{0.0F,0.3886F},{0.3366F,0.1943F},\n    {0.1943F,0.3366F},{0.3366F,0.1943F},{0.1943F,0.3366F},{0.2748F,0.2748F}\n  }\n};\n\n#endif\n\n#if defined(OP_FIXED_POINT)\n\n/*Matrices for downmixing from the supported channel counts to stereo.\n  The matrices with 5 or more channels are normalized to a total volume of 2.0,\n   since most mixes sound too quiet if normalized to 1.0 (as there is generally\n   little volume in the side/rear channels).\n  Hence we keep the coefficients in Q14, so the downmix values won't overflow a\n   32-bit number.*/\nstatic const opus_int16 
OP_STEREO_DOWNMIX_Q14\n [OP_NCHANNELS_MAX-2][OP_NCHANNELS_MAX][2]={\n  /*3.0*/\n  {\n    {9598,0},{6786,6786},{0,9598}\n  },\n  /*quadrophonic*/\n  {\n    {6924,0},{0,6924},{5996,3464},{3464,5996}\n  },\n  /*5.0*/\n  {\n    {10666,0},{7537,7537},{0,10666},{9234,5331},{5331,9234}\n  },\n  /*5.1*/\n  {\n    {8668,0},{6129,6129},{0,8668},{7507,4335},{4335,7507},{6129,6129}\n  },\n  /*6.1*/\n  {\n    {7459,0},{5275,5275},{0,7459},{6460,3731},{3731,6460},{4568,4568},\n    {5275,5275}\n  },\n  /*7.1*/\n  {\n    {6368,0},{4502,4502},{0,6368},{5515,3183},{3183,5515},{5515,3183},\n    {3183,5515},{4502,4502}\n  }\n};\n\nint op_read(OggOpusFile *_of,opus_int16 *_pcm,int _buf_size,int *_li){\n  return op_read_native(_of,_pcm,_buf_size,_li);\n}\n\nstatic int op_stereo_filter(OggOpusFile *_of,void *_dst,int _dst_sz,\n op_sample *_src,int _nsamples,int _nchannels){\n  (void)_of;\n  _nsamples=OP_MIN(_nsamples,_dst_sz>>1);\n  if(_nchannels==2)memcpy(_dst,_src,_nsamples*2*sizeof(*_src));\n  else{\n    opus_int16 *dst;\n    int         i;\n    dst=(opus_int16 *)_dst;\n    if(_nchannels==1){\n      for(i=0;i<_nsamples;i++)dst[2*i+0]=dst[2*i+1]=_src[i];\n    }\n    else{\n      for(i=0;i<_nsamples;i++){\n        opus_int32 l;\n        opus_int32 r;\n        int        ci;\n        l=r=0;\n        for(ci=0;ci<_nchannels;ci++){\n          opus_int32 s;\n          s=_src[_nchannels*i+ci];\n          l+=OP_STEREO_DOWNMIX_Q14[_nchannels-3][ci][0]*s;\n          r+=OP_STEREO_DOWNMIX_Q14[_nchannels-3][ci][1]*s;\n        }\n        /*TODO: For 5 or more channels, we should do soft clipping here.*/\n        dst[2*i+0]=(opus_int16)OP_CLAMP(-32768,l+8192>>14,32767);\n        dst[2*i+1]=(opus_int16)OP_CLAMP(-32768,r+8192>>14,32767);\n      }\n    }\n  }\n  return _nsamples;\n}\n\nint op_read_stereo(OggOpusFile *_of,opus_int16 *_pcm,int _buf_size){\n  return op_filter_read_native(_of,_pcm,_buf_size,op_stereo_filter,NULL);\n}\n\n# if !defined(OP_DISABLE_FLOAT_API)\n\nstatic int 
op_short2float_filter(OggOpusFile *_of,void *_dst,int _dst_sz,\n op_sample *_src,int _nsamples,int _nchannels){\n  float *dst;\n  int    i;\n  (void)_of;\n  dst=(float *)_dst;\n  if(OP_UNLIKELY(_nsamples*_nchannels>_dst_sz))_nsamples=_dst_sz/_nchannels;\n  _dst_sz=_nsamples*_nchannels;\n  for(i=0;i<_dst_sz;i++)dst[i]=(1.0F/32768)*_src[i];\n  return _nsamples;\n}\n\nint op_read_float(OggOpusFile *_of,float *_pcm,int _buf_size,int *_li){\n  return op_filter_read_native(_of,_pcm,_buf_size,op_short2float_filter,_li);\n}\n\nstatic int op_short2float_stereo_filter(OggOpusFile *_of,\n void *_dst,int _dst_sz,op_sample *_src,int _nsamples,int _nchannels){\n  float *dst;\n  int    i;\n  dst=(float *)_dst;\n  _nsamples=OP_MIN(_nsamples,_dst_sz>>1);\n  if(_nchannels==1){\n    _nsamples=op_short2float_filter(_of,dst,_nsamples,_src,_nsamples,1);\n    for(i=_nsamples;i-->0;)dst[2*i+0]=dst[2*i+1]=dst[i];\n  }\n  else if(_nchannels<5){\n    /*For 3 or 4 channels, we can downmix in fixed point without risk of\n       clipping.*/\n    if(_nchannels>2){\n      _nsamples=op_stereo_filter(_of,_src,_nsamples*2,\n       _src,_nsamples,_nchannels);\n    }\n    return op_short2float_filter(_of,dst,_dst_sz,_src,_nsamples,2);\n  }\n  else{\n    /*For 5 or more channels, we convert to floats and then downmix (so that we\n       don't risk clipping).*/\n    for(i=0;i<_nsamples;i++){\n      float l;\n      float r;\n      int   ci;\n      l=r=0;\n      for(ci=0;ci<_nchannels;ci++){\n        float s;\n        s=(1.0F/32768)*_src[_nchannels*i+ci];\n        l+=OP_STEREO_DOWNMIX[_nchannels-3][ci][0]*s;\n        r+=OP_STEREO_DOWNMIX[_nchannels-3][ci][1]*s;\n      }\n      dst[2*i+0]=l;\n      dst[2*i+1]=r;\n    }\n  }\n  return _nsamples;\n}\n\nint op_read_float_stereo(OggOpusFile *_of,float *_pcm,int _buf_size){\n  return op_filter_read_native(_of,_pcm,_buf_size,\n   op_short2float_stereo_filter,NULL);\n}\n\n# endif\n\n#else\n\n# if defined(OP_HAVE_LRINTF)\n#  include <math.h>\n#  define 
op_float2int(_x) (lrintf(_x))\n# else\n#  define op_float2int(_x) ((int)((_x)+((_x)<0?-0.5F:0.5F)))\n# endif\n\n/*The dithering code here is adapted from opusdec, part of opus-tools.\n  It was originally written by Greg Maxwell.*/\n\nstatic opus_uint32 op_rand(opus_uint32 _seed){\n  return _seed*96314165+907633515&0xFFFFFFFFU;\n}\n\n/*This implements 16-bit quantization with full triangular dither and IIR noise\n   shaping.\n  The noise shaping filters were designed by Sebastian Gesemann, and are based\n   on the LAME ATH curves with flattening to limit their peak gain to 20 dB.\n  Everyone else's noise shaping filters are mildly crazy.\n  The 48 kHz version of this filter is just a warped version of the 44.1 kHz\n   filter and probably could be improved by shifting the HF shelf up in\n   frequency a little bit, since 48 kHz has a bit more room and being more\n   conservative against bat-ears is probably more important than more noise\n   suppression.\n  This process can increase the peak level of the signal (in theory by the peak\n   error of 1.5 +20 dB, though that is unobservably rare).\n  To avoid clipping, the signal is attenuated by a couple thousandths of a dB.\n  Initially, the approach taken here was to only attenuate by the 99.9th\n   percentile, making clipping rare but not impossible (like SoX), but the\n   limited gain of the filter means that the worst case was only two\n   thousandths of a dB more, so this just uses the worst case.\n  The attenuation is probably also helpful to prevent clipping in the DAC\n   reconstruction filters or downstream resampling, in any case.*/\n\n# define OP_GAIN (32753.0F)\n\n# define OP_PRNG_GAIN (1.0F/0xFFFFFFFF)\n\n/*48 kHz noise shaping filter, sd=2.34.*/\n\nstatic const float OP_FCOEF_B[4]={\n  2.2374F,-0.7339F,-0.1251F,-0.6033F\n};\n\nstatic const float OP_FCOEF_A[4]={\n  0.9030F,0.0116F,-0.5853F,-0.2571F\n};\n\nstatic int op_float2short_filter(OggOpusFile *_of,void *_dst,int _dst_sz,\n float *_src,int 
_nsamples,int _nchannels){\n  opus_int16 *dst;\n  int         ci;\n  int         i;\n  dst=(opus_int16 *)_dst;\n  if(OP_UNLIKELY(_nsamples*_nchannels>_dst_sz))_nsamples=_dst_sz/_nchannels;\n# if defined(OP_SOFT_CLIP)\n  if(_of->state_channel_count!=_nchannels){\n    for(ci=0;ci<_nchannels;ci++)_of->clip_state[ci]=0;\n  }\n  opus_pcm_soft_clip(_src,_nsamples,_nchannels,_of->clip_state);\n# endif\n  if(_of->dither_disabled){\n    for(i=0;i<_nchannels*_nsamples;i++){\n      dst[i]=op_float2int(OP_CLAMP(-32768,32768.0F*_src[i],32767));\n    }\n  }\n  else{\n    opus_uint32 seed;\n    int         mute;\n    seed=_of->dither_seed;\n    mute=_of->dither_mute;\n    if(_of->state_channel_count!=_nchannels)mute=65;\n    /*In order to avoid replacing digital silence with quiet dither noise, we\n       mute if the output has been silent for a while.*/\n    if(mute>64)memset(_of->dither_a,0,sizeof(*_of->dither_a)*4*_nchannels);\n    for(i=0;i<_nsamples;i++){\n      int silent;\n      silent=1;\n      for(ci=0;ci<_nchannels;ci++){\n        float r;\n        float s;\n        float err;\n        int   si;\n        int   j;\n        s=_src[_nchannels*i+ci];\n        silent&=s==0;\n        s*=OP_GAIN;\n        err=0;\n        for(j=0;j<4;j++){\n          err+=OP_FCOEF_B[j]*_of->dither_b[ci*4+j]\n           -OP_FCOEF_A[j]*_of->dither_a[ci*4+j];\n        }\n        for(j=3;j-->0;)_of->dither_a[ci*4+j+1]=_of->dither_a[ci*4+j];\n        for(j=3;j-->0;)_of->dither_b[ci*4+j+1]=_of->dither_b[ci*4+j];\n        _of->dither_a[ci*4]=err;\n        s-=err;\n        if(mute>16)r=0;\n        else{\n          seed=op_rand(seed);\n          r=seed*OP_PRNG_GAIN;\n          seed=op_rand(seed);\n          r-=seed*OP_PRNG_GAIN;\n        }\n        /*Clamp in float out of paranoia that the input will be > 96 dBFS and\n           wrap if the integer is clamped.*/\n        si=op_float2int(OP_CLAMP(-32768,s+r,32767));\n        dst[_nchannels*i+ci]=(opus_int16)si;\n        /*Including clipping in the noise 
shaping is generally disastrous: the\n           futile effort to restore the clipped energy results in more clipping.\n          However, small amounts---at the level which could normally be created\n           by dither and rounding---are harmless and can even reduce clipping\n           somewhat due to the clipping sometimes reducing the dither + rounding\n           error.*/\n        _of->dither_b[ci*4]=mute>16?0:OP_CLAMP(-1.5F,si-s,1.5F);\n      }\n      mute++;\n      if(!silent)mute=0;\n    }\n    _of->dither_mute=OP_MIN(mute,65);\n    _of->dither_seed=seed;\n  }\n  _of->state_channel_count=_nchannels;\n  return _nsamples;\n}\n\nint op_read(OggOpusFile *_of,opus_int16 *_pcm,int _buf_size,int *_li){\n  return op_filter_read_native(_of,_pcm,_buf_size,op_float2short_filter,_li);\n}\n\nint op_read_float(OggOpusFile *_of,float *_pcm,int _buf_size,int *_li){\n  _of->state_channel_count=0;\n  return op_read_native(_of,_pcm,_buf_size,_li);\n}\n\nstatic int op_stereo_filter(OggOpusFile *_of,void *_dst,int _dst_sz,\n op_sample *_src,int _nsamples,int _nchannels){\n  (void)_of;\n  _nsamples=OP_MIN(_nsamples,_dst_sz>>1);\n  if(_nchannels==2)memcpy(_dst,_src,_nsamples*2*sizeof(*_src));\n  else{\n    float *dst;\n    int    i;\n    dst=(float *)_dst;\n    if(_nchannels==1){\n      for(i=0;i<_nsamples;i++)dst[2*i+0]=dst[2*i+1]=_src[i];\n    }\n    else{\n      for(i=0;i<_nsamples;i++){\n        float l;\n        float r;\n        int   ci;\n        l=r=0;\n        for(ci=0;ci<_nchannels;ci++){\n          l+=OP_STEREO_DOWNMIX[_nchannels-3][ci][0]*_src[_nchannels*i+ci];\n          r+=OP_STEREO_DOWNMIX[_nchannels-3][ci][1]*_src[_nchannels*i+ci];\n        }\n        dst[2*i+0]=l;\n        dst[2*i+1]=r;\n      }\n    }\n  }\n  return _nsamples;\n}\n\nstatic int op_float2short_stereo_filter(OggOpusFile *_of,\n void *_dst,int _dst_sz,op_sample *_src,int _nsamples,int _nchannels){\n  opus_int16 *dst;\n  dst=(opus_int16 *)_dst;\n  if(_nchannels==1){\n    int i;\n    
_nsamples=op_float2short_filter(_of,dst,_dst_sz>>1,_src,_nsamples,1);\n    for(i=_nsamples;i-->0;)dst[2*i+0]=dst[2*i+1]=dst[i];\n  }\n  else{\n    if(_nchannels>2){\n      _nsamples=OP_MIN(_nsamples,_dst_sz>>1);\n      _nsamples=op_stereo_filter(_of,_src,_nsamples*2,\n       _src,_nsamples,_nchannels);\n    }\n    _nsamples=op_float2short_filter(_of,dst,_dst_sz,_src,_nsamples,2);\n  }\n  return _nsamples;\n}\n\nint op_read_stereo(OggOpusFile *_of,opus_int16 *_pcm,int _buf_size){\n  return op_filter_read_native(_of,_pcm,_buf_size,\n   op_float2short_stereo_filter,NULL);\n}\n\nint op_read_float_stereo(OggOpusFile *_of,float *_pcm,int _buf_size){\n  _of->state_channel_count=0;\n  return op_filter_read_native(_of,_pcm,_buf_size,op_stereo_filter,NULL);\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/stream.c",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 1994-2012           *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************\n\n function: stdio-based convenience library for opening/seeking/decoding\n last mod: $Id: vorbisfile.c 17573 2010-10-27 14:53:59Z xiphmont $\n\n ********************************************************************/\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"internal.h\"\n#include <sys/types.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#if defined(_WIN32)\n# include <io.h>\n#endif\n\ntypedef struct OpusMemStream OpusMemStream;\n\n#define OP_MEM_SIZE_MAX (~(size_t)0>>1)\n#define OP_MEM_DIFF_MAX ((ptrdiff_t)OP_MEM_SIZE_MAX)\n\n/*The context information needed to read from a block of memory as if it were a\n   file.*/\nstruct OpusMemStream{\n  /*The block of memory to read from.*/\n  const unsigned char *data;\n  /*The total size of the block.\n    This must be at most OP_MEM_SIZE_MAX to prevent signed overflow while\n     seeking.*/\n  ptrdiff_t            size;\n  /*The current file position.\n    This is allowed to be set arbitrarily greater than size (i.e., past the end\n     of the block, though we will not read data past the end of the block), but\n     is not allowed to be negative (i.e., before the beginning of the 
block).*/\n  ptrdiff_t            pos;\n};\n\nstatic int op_fread(void *_stream,unsigned char *_ptr,int _buf_size){\n  FILE   *stream;\n  size_t  ret;\n  /*Check for empty read.*/\n  if(_buf_size<=0)return 0;\n  stream=(FILE *)_stream;\n  ret=fread(_ptr,1,_buf_size,stream);\n  OP_ASSERT(ret<=(size_t)_buf_size);\n  /*If ret==0 and !feof(stream), there was a read error.*/\n  return ret>0||feof(stream)?(int)ret:OP_EREAD;\n}\n\nstatic int op_fseek(void *_stream,opus_int64 _offset,int _whence){\n#if defined(_WIN32)\n  /*_fseeki64() is not exposed until MSVCRT80.\n    This is the default starting with MSVC 2005 (_MSC_VER>=1400), but we want\n     to allow linking against older MSVCRT versions for compatibility back to\n     XP without installing extra runtime libraries.\n    i686-pc-mingw32 does not have fseeko() and requires\n     __MSVCRT_VERSION__>=0x800 for _fseeki64(), which screws up linking with\n     other libraries (that don't use MSVCRT80 from MSVC 2005 by default).\n    i686-w64-mingw32 does have fseeko() and respects _FILE_OFFSET_BITS, but I\n     don't know how to detect that at compile time.\n    We could just use fseeko64() (which is available in both), but it's\n     implemented using fgetpos()/fsetpos() just like this code, except without\n     the overflow checking, so we prefer our version.*/\n  opus_int64 pos;\n  /*We don't use fpos_t directly because it might be a struct if __STDC__ is\n     non-zero or _INTEGRAL_MAX_BITS < 64.\n    I'm not certain when the latter is true, but someone could in theory set\n     the former.\n    Either way, it should be binary compatible with a normal 64-bit int (this\n     assumption is not portable, but I believe it is true for MSVCRT).*/\n  OP_ASSERT(sizeof(pos)==sizeof(fpos_t));\n  /*Translate the seek to an absolute one.*/\n  if(_whence==SEEK_CUR){\n    int ret;\n    ret=fgetpos((FILE *)_stream,(fpos_t *)&pos);\n    if(ret)return ret;\n  }\n  else if(_whence==SEEK_END)pos=_filelengthi64(_fileno((FILE *)_stream));\n
  else if(_whence==SEEK_SET)pos=0;\n  else return -1;\n  /*Check for errors or overflow.*/\n  if(pos<0||_offset<-pos||_offset>OP_INT64_MAX-pos)return -1;\n  pos+=_offset;\n  return fsetpos((FILE *)_stream,(fpos_t *)&pos);\n#else\n  /*This function actually conforms to the SUSv2 and POSIX.1-2001, so we prefer\n     it except on Windows.*/\n  return fseeko((FILE *)_stream,(off_t)_offset,_whence);\n#endif\n}\n\nstatic opus_int64 op_ftell(void *_stream){\n#if defined(_WIN32)\n  /*_ftelli64() is not exposed until MSVCRT80, and ftello()/ftello64() have\n     the same problems as fseeko()/fseeko64() in MingW.\n    See above for a more detailed explanation.*/\n  opus_int64 pos;\n  OP_ASSERT(sizeof(pos)==sizeof(fpos_t));\n  return fgetpos((FILE *)_stream,(fpos_t *)&pos)?-1:pos;\n#else\n  /*This function actually conforms to the SUSv2 and POSIX.1-2001, so we prefer\n     it except on Windows.*/\n  return ftello((FILE *)_stream);\n#endif\n}\n\nstatic const OpusFileCallbacks OP_FILE_CALLBACKS={\n  op_fread,\n  op_fseek,\n  op_ftell,\n  (op_close_func)fclose\n};\n\n#if defined(_WIN32)\n# include <stddef.h>\n# include <errno.h>\n\n/*Windows doesn't accept UTF-8 by default, and we don't have a wchar_t API,\n   so if we just pass the path to fopen(), then there'd be no way for a user\n   of our API to open a Unicode filename.\n  Instead, we translate from UTF-8 to UTF-16 and use Windows' wchar_t API.\n  This makes this API more consistent with platforms where the character set\n   used by fopen is the same as used on disk, which is generally UTF-8, and\n   with our metadata API, which always uses UTF-8.*/\nstatic wchar_t *op_utf8_to_utf16(const char *_src){\n  wchar_t *dst;\n  size_t   len;\n  len=strlen(_src);\n  /*Worst-case output is 1 wide character per 1 input character.*/\n  dst=(wchar_t *)_ogg_malloc(sizeof(*dst)*(len+1));\n  if(dst!=NULL){\n    size_t si;\n    size_t di;\n    for(di=si=0;si<len;si++){\n      int c0;\n      c0=(unsigned char)_src[si];\n      
if(!(c0&0x80)){\n        /*Start byte says this is a 1-byte sequence.*/\n        dst[di++]=(wchar_t)c0;\n        continue;\n      }\n      else{\n        int c1;\n        /*This is safe, because c0 was not 0 and _src is NUL-terminated.*/\n        c1=(unsigned char)_src[si+1];\n        if((c1&0xC0)==0x80){\n          /*Found at least one continuation byte.*/\n          if((c0&0xE0)==0xC0){\n            wchar_t w;\n            /*Start byte says this is a 2-byte sequence.*/\n            w=(c0&0x1F)<<6|c1&0x3F;\n            if(w>=0x80U){\n              /*This is a 2-byte sequence that is not overlong.*/\n              dst[di++]=w;\n              si++;\n              continue;\n            }\n          }\n          else{\n            int c2;\n            /*This is safe, because c1 was not 0 and _src is NUL-terminated.*/\n            c2=(unsigned char)_src[si+2];\n            if((c2&0xC0)==0x80){\n              /*Found at least two continuation bytes.*/\n              if((c0&0xF0)==0xE0){\n                wchar_t w;\n                /*Start byte says this is a 3-byte sequence.*/\n                w=(c0&0xF)<<12|(c1&0x3F)<<6|c2&0x3F;\n                if(w>=0x800U&&(w<0xD800||w>=0xE000)&&w<0xFFFE){\n                  /*This is a 3-byte sequence that is not overlong, not a\n                     UTF-16 surrogate pair value, and not a 'not a character'\n                     value.*/\n                  dst[di++]=w;\n                  si+=2;\n                  continue;\n                }\n              }\n              else{\n                int c3;\n                /*This is safe, because c2 was not 0 and _src is\n                   NUL-terminated.*/\n                c3=(unsigned char)_src[si+3];\n                if((c3&0xC0)==0x80){\n                  /*Found at least three continuation bytes.*/\n                  if((c0&0xF8)==0xF0){\n                    opus_uint32 w;\n                    /*Start byte says this is a 4-byte sequence.*/\n                    
w=(c0&7)<<18|(c1&0x3F)<<12|(c2&0x3F)<<6|(c3&0x3F);\n                    if(w>=0x10000U&&w<0x110000U){\n                      /*This is a 4-byte sequence that is not overlong and not\n                         greater than the largest valid Unicode code point.\n                        Convert it to a surrogate pair.*/\n                      w-=0x10000;\n                      dst[di++]=(wchar_t)(0xD800+(w>>10));\n                      dst[di++]=(wchar_t)(0xDC00+(w&0x3FF));\n                      si+=3;\n                      continue;\n                    }\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n      /*If we got here, we encountered an illegal UTF-8 sequence.*/\n      _ogg_free(dst);\n      return NULL;\n    }\n    OP_ASSERT(di<=len);\n    dst[di]='\\0';\n  }\n  return dst;\n}\n\n#endif\n\nvoid *op_fopen(OpusFileCallbacks *_cb,const char *_path,const char *_mode){\n  FILE *fp;\n#if !defined(_WIN32)\n  fp=fopen(_path,_mode);\n#else\n  fp=NULL;\n  {\n    wchar_t *wpath;\n    wchar_t *wmode;\n    wpath=op_utf8_to_utf16(_path);\n    wmode=op_utf8_to_utf16(_mode);\n    if(wmode==NULL)errno=EINVAL;\n    else if(wpath==NULL)errno=ENOENT;\n    else fp=_wfopen(wpath,wmode);\n    _ogg_free(wmode);\n    _ogg_free(wpath);\n  }\n#endif\n  if(fp!=NULL)*_cb=*&OP_FILE_CALLBACKS;\n  return fp;\n}\n\nvoid *op_fdopen(OpusFileCallbacks *_cb,int _fd,const char *_mode){\n  FILE *fp;\n  fp=fdopen(_fd,_mode);\n  if(fp!=NULL)*_cb=*&OP_FILE_CALLBACKS;\n  return fp;\n}\n\nvoid *op_freopen(OpusFileCallbacks *_cb,const char *_path,const char *_mode,\n void *_stream){\n  FILE *fp;\n#if !defined(_WIN32)\n  fp=freopen(_path,_mode,(FILE *)_stream);\n#else\n  fp=NULL;\n  {\n    wchar_t *wpath;\n    wchar_t *wmode;\n    wpath=op_utf8_to_utf16(_path);\n    wmode=op_utf8_to_utf16(_mode);\n    if(wmode==NULL)errno=EINVAL;\n    else if(wpath==NULL)errno=ENOENT;\n    else fp=_wfreopen(wpath,wmode,(FILE *)_stream);\n    _ogg_free(wmode);\n   
 _ogg_free(wpath);\n  }\n#endif\n  if(fp!=NULL)*_cb=*&OP_FILE_CALLBACKS;\n  return fp;\n}\n\nstatic int op_mem_read(void *_stream,unsigned char *_ptr,int _buf_size){\n  OpusMemStream *stream;\n  ptrdiff_t      size;\n  ptrdiff_t      pos;\n  stream=(OpusMemStream *)_stream;\n  /*Check for empty read.*/\n  if(_buf_size<=0)return 0;\n  size=stream->size;\n  pos=stream->pos;\n  /*Check for EOF.*/\n  if(pos>=size)return 0;\n  /*Check for a short read.*/\n  _buf_size=(int)OP_MIN(size-pos,_buf_size);\n  memcpy(_ptr,stream->data+pos,_buf_size);\n  pos+=_buf_size;\n  stream->pos=pos;\n  return _buf_size;\n}\n\nstatic int op_mem_seek(void *_stream,opus_int64 _offset,int _whence){\n  OpusMemStream *stream;\n  ptrdiff_t      pos;\n  stream=(OpusMemStream *)_stream;\n  pos=stream->pos;\n  OP_ASSERT(pos>=0);\n  switch(_whence){\n    case SEEK_SET:{\n      /*Check for overflow:*/\n      if(_offset<0||_offset>OP_MEM_DIFF_MAX)return -1;\n      pos=(ptrdiff_t)_offset;\n    }break;\n    case SEEK_CUR:{\n      /*Check for overflow:*/\n      if(_offset<-pos||_offset>OP_MEM_DIFF_MAX-pos)return -1;\n      pos=(ptrdiff_t)(pos+_offset);\n    }break;\n    case SEEK_END:{\n      ptrdiff_t size;\n      size=stream->size;\n      OP_ASSERT(size>=0);\n      /*Check for overflow:*/\n      if(_offset>size||_offset<size-OP_MEM_DIFF_MAX)return -1;\n      pos=(ptrdiff_t)(size-_offset);\n    }break;\n    default:return -1;\n  }\n  stream->pos=pos;\n  return 0;\n}\n\nstatic opus_int64 op_mem_tell(void *_stream){\n  OpusMemStream *stream;\n  stream=(OpusMemStream *)_stream;\n  return (ogg_int64_t)stream->pos;\n}\n\nstatic int op_mem_close(void *_stream){\n  _ogg_free(_stream);\n  return 0;\n}\n\nstatic const OpusFileCallbacks OP_MEM_CALLBACKS={\n  op_mem_read,\n  op_mem_seek,\n  op_mem_tell,\n  op_mem_close\n};\n\nvoid *op_mem_stream_create(OpusFileCallbacks *_cb,\n const unsigned char *_data,size_t _size){\n  OpusMemStream *stream;\n  if(_size>OP_MEM_SIZE_MAX)return NULL;\n  stream=(OpusMemStream 
*)_ogg_malloc(sizeof(*stream));\n  if(stream!=NULL){\n    *_cb=*&OP_MEM_CALLBACKS;\n    stream->data=_data;\n    stream->size=_size;\n    stream->pos=0;\n  }\n  return stream;\n}\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/wincerts.c",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 2013                *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************/\n\n/*This should really be part of OpenSSL, but there's been a patch [1] sitting\n   in their bugtracker for over two years that implements this, without any\n   action, so I'm giving up and re-implementing it locally.\n\n  [1] <http://rt.openssl.org/Ticket/Display.html?id=2158>*/\n\n#ifdef HAVE_CONFIG_H\n#include \"config.h\"\n#endif\n\n#include \"internal.h\"\n#if defined(OP_ENABLE_HTTP)&&defined(_WIN32)\n/*You must include windows.h before wincrypt.h and x509.h.*/\n# define WIN32_LEAN_AND_MEAN\n# define WIN32_EXTRA_LEAN\n# include <windows.h>\n/*You must include wincrypt.h before x509.h, too, or X509_NAME doesn't get\n   defined properly.*/\n# include <wincrypt.h>\n# include <openssl/ssl.h>\n# include <openssl/err.h>\n# include <openssl/x509.h>\n\nstatic int op_capi_new(X509_LOOKUP *_lu){\n  HCERTSTORE h_store;\n  h_store=CertOpenStore(CERT_STORE_PROV_SYSTEM_A,0,0,\n   CERT_STORE_OPEN_EXISTING_FLAG|CERT_STORE_READONLY_FLAG|\n   CERT_SYSTEM_STORE_CURRENT_USER|CERT_STORE_SHARE_CONTEXT_FLAG,\"ROOT\");\n  if(h_store!=NULL){\n    _lu->method_data=(char *)h_store;\n    return 1;\n  }\n  return 0;\n}\n\nstatic void op_capi_free(X509_LOOKUP *_lu){\n  HCERTSTORE h_store;\n  
h_store=(HCERTSTORE)_lu->method_data;\n# if defined(OP_ENABLE_ASSERTIONS)\n  OP_ALWAYS_TRUE(CertCloseStore(h_store,CERT_CLOSE_STORE_CHECK_FLAG));\n# else\n  CertCloseStore(h_store,0);\n# endif\n}\n\nstatic int op_capi_retrieve_by_subject(X509_LOOKUP *_lu,int _type,\n X509_NAME *_name,X509_OBJECT *_ret){\n  X509_OBJECT *obj;\n  CRYPTO_w_lock(CRYPTO_LOCK_X509_STORE);\n  obj=X509_OBJECT_retrieve_by_subject(_lu->store_ctx->objs,_type,_name);\n  CRYPTO_w_unlock(CRYPTO_LOCK_X509_STORE);\n  if(obj!=NULL){\n    _ret->type=obj->type;\n    memcpy(&_ret->data,&obj->data,sizeof(_ret->data));\n    return 1;\n  }\n  return 0;\n}\n\nstatic int op_capi_get_by_subject(X509_LOOKUP *_lu,int _type,X509_NAME *_name,\n X509_OBJECT *_ret){\n  HCERTSTORE h_store;\n  if(_name==NULL)return 0;\n  if(_name->bytes==NULL||_name->bytes->length<=0||_name->modified){\n    if(i2d_X509_NAME(_name,NULL)<0)return 0;\n    OP_ASSERT(_name->bytes->length>0);\n  }\n  h_store=(HCERTSTORE)_lu->method_data;\n  switch(_type){\n    case X509_LU_X509:{\n      CERT_NAME_BLOB  find_para;\n      PCCERT_CONTEXT  cert;\n      X509           *x;\n      int             ret;\n      /*Although X509_NAME contains a canon_enc field, that \"canonical\" [1]\n         encoding was just made up by OpenSSL.\n        It doesn't correspond to any actual standard, and since it drops the\n         initial sequence header, won't be recognized by the Crypto API.\n        The assumption here is that CertFindCertificateInStore() will allow any\n         appropriate variations in the encoding when it does its comparison.\n        This is, however, emphatically not true under Wine, which just compares\n         the encodings with memcmp().\n        Most of the time things work anyway, though, and there isn't really\n         anything we can do to make the situation better.\n\n        [1] A \"canonical form\" is defined as the one where, if you locked 10\n         mathematicians in a room and asked them to come up with a\n         
representation for something, it's the answer that 9 of them would\n         give you back.\n        I don't think OpenSSL's encoding qualifies.*/\n      if(OP_UNLIKELY(_name->bytes->length>MAXDWORD))return 0;\n      find_para.cbData=(DWORD)_name->bytes->length;\n      find_para.pbData=(unsigned char *)_name->bytes->data;\n      cert=CertFindCertificateInStore(h_store,X509_ASN_ENCODING,0,\n       CERT_FIND_SUBJECT_NAME,&find_para,NULL);\n      if(cert==NULL)return 0;\n      x=d2i_X509(NULL,(const unsigned char **)&cert->pbCertEncoded,\n       cert->cbCertEncoded);\n      CertFreeCertificateContext(cert);\n      if(x==NULL)return 0;\n      ret=X509_STORE_add_cert(_lu->store_ctx,x);\n      X509_free(x);\n      if(ret)return op_capi_retrieve_by_subject(_lu,_type,_name,_ret);\n    }break;\n    case X509_LU_CRL:{\n      CERT_INFO      cert_info;\n      CERT_CONTEXT   find_para;\n      PCCRL_CONTEXT  crl;\n      X509_CRL      *x;\n      int            ret;\n      ret=op_capi_retrieve_by_subject(_lu,_type,_name,_ret);\n      if(ret>0)return ret;\n      memset(&cert_info,0,sizeof(cert_info));\n      if(OP_UNLIKELY(_name->bytes->length>MAXDWORD))return 0;\n      cert_info.Issuer.cbData=(DWORD)_name->bytes->length;\n      cert_info.Issuer.pbData=(unsigned char *)_name->bytes->data;\n      memset(&find_para,0,sizeof(find_para));\n      find_para.pCertInfo=&cert_info;\n      crl=CertFindCRLInStore(h_store,0,0,CRL_FIND_ISSUED_BY,&find_para,NULL);\n      if(crl==NULL)return 0;\n      x=d2i_X509_CRL(NULL,(const unsigned char **)&crl->pbCrlEncoded,\n       crl->cbCrlEncoded);\n      CertFreeCRLContext(crl);\n      if(x==NULL)return 0;\n      ret=X509_STORE_add_crl(_lu->store_ctx,x);\n      X509_CRL_free(x);\n      if(ret)return op_capi_retrieve_by_subject(_lu,_type,_name,_ret);\n    }break;\n  }\n  return 0;\n}\n\n/*This is not const because OpenSSL doesn't allow it, even though it won't\n   write to it.*/\nstatic X509_LOOKUP_METHOD X509_LOOKUP_CAPI={\n  \"Load Crypto API store 
into cache\",\n  op_capi_new,\n  op_capi_free,\n  NULL,\n  NULL,\n  NULL,\n  op_capi_get_by_subject,\n  NULL,\n  NULL,\n  NULL\n};\n\nint SSL_CTX_set_default_verify_paths_win32(SSL_CTX *_ssl_ctx){\n  X509_STORE  *store;\n  X509_LOOKUP *lu;\n  /*We intentionally do not add the normal default paths, as they are usually\n     wrong, and are just asking to be used as an exploit vector.*/\n  store=SSL_CTX_get_cert_store(_ssl_ctx);\n  OP_ASSERT(store!=NULL);\n  lu=X509_STORE_add_lookup(store,&X509_LOOKUP_CAPI);\n  if(lu==NULL)return 0;\n  ERR_clear_error();\n  return 1;\n}\n\n#endif\n"
  },
  {
    "path": "ThirdParty/opusfile-0.9/src/winerrno.h",
    "content": "/********************************************************************\n *                                                                  *\n * THIS FILE IS PART OF THE libopusfile SOFTWARE CODEC SOURCE CODE. *\n * USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS     *\n * GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *\n * IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING.       *\n *                                                                  *\n * THE libopusfile SOURCE CODE IS (C) COPYRIGHT 2012                *\n * by the Xiph.Org Foundation and contributors http://www.xiph.org/ *\n *                                                                  *\n ********************************************************************/\n#if !defined(_opusfile_winerrno_h)\n# define _opusfile_winerrno_h (1)\n\n# include <errno.h>\n# include <winerror.h>\n\n/*These conflict with the MSVC errno.h definitions, but we don't need to use\n   the original ones in any file that deals with sockets.\n  We could map the WSA errors to the errno.h ones (most of which are only\n   available on sufficiently new versions of MSVC), but they aren't ordered the\n   same, and given how rarely we actually look at the values, I don't think\n   it's worth a lookup table.*/\n# undef EWOULDBLOCK\n# undef EINPROGRESS\n# undef EALREADY\n# undef ENOTSOCK\n# undef EDESTADDRREQ\n# undef EMSGSIZE\n# undef EPROTOTYPE\n# undef ENOPROTOOPT\n# undef EPROTONOSUPPORT\n# undef EOPNOTSUPP\n# undef EAFNOSUPPORT\n# undef EADDRINUSE\n# undef EADDRNOTAVAIL\n# undef ENETDOWN\n# undef ENETUNREACH\n# undef ENETRESET\n# undef ECONNABORTED\n# undef ECONNRESET\n# undef ENOBUFS\n# undef EISCONN\n# undef ENOTCONN\n# undef ETIMEDOUT\n# undef ECONNREFUSED\n# undef ELOOP\n# undef ENAMETOOLONG\n# undef EHOSTUNREACH\n# undef ENOTEMPTY\n\n# define EWOULDBLOCK     (WSAEWOULDBLOCK-WSABASEERR)\n# define EINPROGRESS     (WSAEINPROGRESS-WSABASEERR)\n# define EALREADY        
(WSAEALREADY-WSABASEERR)\n# define ENOTSOCK        (WSAENOTSOCK-WSABASEERR)\n# define EDESTADDRREQ    (WSAEDESTADDRREQ-WSABASEERR)\n# define EMSGSIZE        (WSAEMSGSIZE-WSABASEERR)\n# define EPROTOTYPE      (WSAEPROTOTYPE-WSABASEERR)\n# define ENOPROTOOPT     (WSAENOPROTOOPT-WSABASEERR)\n# define EPROTONOSUPPORT (WSAEPROTONOSUPPORT-WSABASEERR)\n# define ESOCKTNOSUPPORT (WSAESOCKTNOSUPPORT-WSABASEERR)\n# define EOPNOTSUPP      (WSAEOPNOTSUPP-WSABASEERR)\n# define EPFNOSUPPORT    (WSAEPFNOSUPPORT-WSABASEERR)\n# define EAFNOSUPPORT    (WSAEAFNOSUPPORT-WSABASEERR)\n# define EADDRINUSE      (WSAEADDRINUSE-WSABASEERR)\n# define EADDRNOTAVAIL   (WSAEADDRNOTAVAIL-WSABASEERR)\n# define ENETDOWN        (WSAENETDOWN-WSABASEERR)\n# define ENETUNREACH     (WSAENETUNREACH-WSABASEERR)\n# define ENETRESET       (WSAENETRESET-WSABASEERR)\n# define ECONNABORTED    (WSAECONNABORTED-WSABASEERR)\n# define ECONNRESET      (WSAECONNRESET-WSABASEERR)\n# define ENOBUFS         (WSAENOBUFS-WSABASEERR)\n# define EISCONN         (WSAEISCONN-WSABASEERR)\n# define ENOTCONN        (WSAENOTCONN-WSABASEERR)\n# define ESHUTDOWN       (WSAESHUTDOWN-WSABASEERR)\n# define ETOOMANYREFS    (WSAETOOMANYREFS-WSABASEERR)\n# define ETIMEDOUT       (WSAETIMEDOUT-WSABASEERR)\n# define ECONNREFUSED    (WSAECONNREFUSED-WSABASEERR)\n# define ELOOP           (WSAELOOP-WSABASEERR)\n# define ENAMETOOLONG    (WSAENAMETOOLONG-WSABASEERR)\n# define EHOSTDOWN       (WSAEHOSTDOWN-WSABASEERR)\n# define EHOSTUNREACH    (WSAEHOSTUNREACH-WSABASEERR)\n# define ENOTEMPTY       (WSAENOTEMPTY-WSABASEERR)\n# define EPROCLIM        (WSAEPROCLIM-WSABASEERR)\n# define EUSERS          (WSAEUSERS-WSABASEERR)\n# define EDQUOT          (WSAEDQUOT-WSABASEERR)\n# define ESTALE          (WSAESTALE-WSABASEERR)\n# define EREMOTE         (WSAEREMOTE-WSABASEERR)\n\n#endif\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite/include/ScreenCapture.h",
    "content": "#pragma once\n#include <assert.h>\n#include <chrono>\n#include <cstring>\n#include <functional>\n#include <memory>\n#include <string>\n#include <thread>\n#include <vector>\n\n#if defined(WINDOWS) || defined(WIN32)\n#if defined(SC_LITE_DLL)\n#define SC_LITE_EXTERN __declspec(dllexport)\n#else\n#define SC_LITE_EXTERN\n#endif\n#else\n#define SC_LITE_EXTERN\n#endif\n\nnamespace SL {\nnamespace Screen_Capture {\n    struct SC_LITE_EXTERN Point {\n        int x;\n        int y;\n    };\n    struct SC_LITE_EXTERN Window {\n        size_t Handle;\n        Point Position;\n\n        Point Size;\n        // Name will always be lower case. It is converted to lower case internally by the library for comparisons\n        char Name[128] = {0};\n    };\n    struct SC_LITE_EXTERN Monitor {\n        int Id = INT32_MAX;\n        int Index = INT32_MAX;\n        int Adapter = INT32_MAX;\n        int Height = 0;\n        int Width = 0;\n        int OriginalHeight = 0;\n        int OriginalWidth = 0;\n        // Offsets are the number of pixels that a monitor can be from the origin. 
For example, users can shuffle their\n        // monitors around so this affects their offset.\n        int OffsetX = 0;\n        int OffsetY = 0;\n        int OriginalOffsetX = 0;\n        int OriginalOffsetY = 0;\n        char Name[128] = {0};\n        float Scaling = 1.0f;\n    };\n\n    struct Image;\n    struct ImageBGRA {\n        unsigned char B, G, R, A;\n    };\n\n    // index to self in the GetMonitors() function\n    SC_LITE_EXTERN int Index(const Monitor &mointor);\n    // unique identifier\n    SC_LITE_EXTERN int Id(const Monitor &mointor);\n    SC_LITE_EXTERN int Adapter(const Monitor &mointor);\n    SC_LITE_EXTERN int OffsetX(const Monitor &mointor);\n    SC_LITE_EXTERN int OffsetY(const Monitor &mointor);\n    SC_LITE_EXTERN void OffsetX(Monitor &mointor, int x);\n    SC_LITE_EXTERN void OffsetY(Monitor &mointor, int y);\n    SC_LITE_EXTERN int OffsetX(const Window &mointor);\n    SC_LITE_EXTERN int OffsetY(const Window &mointor);\n    SC_LITE_EXTERN void OffsetX(Window &mointor, int x);\n    SC_LITE_EXTERN void OffsetY(Window &mointor, int y);\n    SC_LITE_EXTERN const char *Name(const Monitor &mointor);\n    SC_LITE_EXTERN const char *Name(const Window &mointor);\n    SC_LITE_EXTERN int Height(const Monitor &mointor);\n    SC_LITE_EXTERN int Width(const Monitor &mointor);\n    SC_LITE_EXTERN void Height(Monitor &mointor, int h);\n    SC_LITE_EXTERN void Width(Monitor &mointor, int w);\n    SC_LITE_EXTERN int Height(const Window &mointor);\n    SC_LITE_EXTERN int Width(const Window &mointor);\n    SC_LITE_EXTERN void Height(Window &mointor, int h);\n    SC_LITE_EXTERN void Width(Window &mointor, int w);\n    SC_LITE_EXTERN int Height(const Image &img);\n    SC_LITE_EXTERN int Width(const Image &img);\n    SC_LITE_EXTERN int X(const Point &p);\n    SC_LITE_EXTERN int Y(const Point &p);\n\n    // the start of the image data, this is not guaranteed to be contiguous.\n    SC_LITE_EXTERN const ImageBGRA *StartSrc(const Image &img);\n
    SC_LITE_EXTERN const ImageBGRA *GotoNextRow(const Image &img, const ImageBGRA *current);\n    SC_LITE_EXTERN bool isDataContiguous(const Image &img);\n    /*\n        this is the ONLY function for pulling data out of the Image object and is laid out here in the header so that\n        users can see how to extract the data and convert it to their own needed format. Initially, I included custom extract functions\n        but this is beyond the scope of this library. You must copy the image data if you want to use it as the library owns the Image Data.\n    */\n    inline void Extract(const Image &img, unsigned char *dst, size_t dst_size)\n    {\n        assert(dst_size >= static_cast<size_t>(Width(img) * Height(img) * sizeof(ImageBGRA)));\n        auto startdst = dst;\n        auto startsrc = StartSrc(img);\n        if (isDataContiguous(img)) { // no padding, the entire copy can be a single memcpy call\n            memcpy(startdst, startsrc, Width(img) * Height(img) * sizeof(ImageBGRA));\n        }\n        else {\n            for (auto i = 0; i < Height(img); i++) {\n                memcpy(startdst, startsrc, sizeof(ImageBGRA) * Width(img));\n                startdst += sizeof(ImageBGRA) * Width(img); // advance to the next row\n                startsrc = GotoNextRow(img, startsrc);      // advance to the next row\n            }\n        }\n    }\n\n    class Timer {\n        using Clock =\n            std::conditional<std::chrono::high_resolution_clock::is_steady, std::chrono::high_resolution_clock, std::chrono::steady_clock>::type;\n\n        std::chrono::microseconds Duration;\n        Clock::time_point Deadline;\n\n      public:\n        template <typename Rep, typename Period>\n        Timer(const std::chrono::duration<Rep, Period> &duration)\n            : Duration(std::chrono::duration_cast<std::chrono::microseconds>(duration)), Deadline(Clock::now() + Duration)\n        {\n        }\n        void start() { Deadline = Clock::now() + Duration; }\n        void wait()\n        {\n   
         const auto now = Clock::now();\n            if (now < Deadline) {\n                std::this_thread::sleep_for(Deadline - now);\n            }\n        }\n        std::chrono::microseconds duration() const { return Duration; }\n    };\n    // will return all attached monitors\n    SC_LITE_EXTERN std::vector<Monitor> GetMonitors();\n    // will return all windows\n    SC_LITE_EXTERN std::vector<Window> GetWindows();\n\n    typedef std::function<void(const SL::Screen_Capture::Image &img, const Window &window)> WindowCaptureCallback;\n    typedef std::function<void(const SL::Screen_Capture::Image &img, const Monitor &monitor)> ScreenCaptureCallback;\n    typedef std::function<void(const SL::Screen_Capture::Image *img, const Point &point)> MouseCallback;\n    typedef std::function<std::vector<Monitor>()> MonitorCallback;\n    typedef std::function<std::vector<Window>()> WindowCallback;\n\n    class SC_LITE_EXTERN IScreenCaptureManager {\n      public:\n        virtual ~IScreenCaptureManager() {}\n\n        // Used by the library to determine the callback frequency\n        template <class Rep, class Period> void setFrameChangeInterval(const std::chrono::duration<Rep, Period> &rel_time)\n        {\n            setFrameChangeInterval(std::make_shared<Timer>(rel_time));\n        }\n        // Used by the library to determine the callback frequency\n        template <class Rep, class Period> void setMouseChangeInterval(const std::chrono::duration<Rep, Period> &rel_time)\n        {\n            setMouseChangeInterval(std::make_shared<Timer>(rel_time));\n        }\n\n        virtual void setFrameChangeInterval(const std::shared_ptr<Timer> &timer) = 0;\n        virtual void setMouseChangeInterval(const std::shared_ptr<Timer> &timer) = 0;\n\n        // Will pause all capturing\n        virtual void pause() = 0;\n        // Will return whether the library is paused\n        virtual bool isPaused() const = 0;\n        // Will resume all capturing if paused, otherwise 
has no effect\n        virtual void resume() = 0;\n    };\n\n    template <typename CAPTURECALLBACK> class ICaptureConfiguration {\n      public:\n        virtual ~ICaptureConfiguration() {}\n        // When a new frame is available the callback is invoked\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onNewFrame(const CAPTURECALLBACK &cb) = 0;\n        // When a change in a frame is detected, the callback is invoked\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onFrameChanged(const CAPTURECALLBACK &cb) = 0;\n        // When a mouse image changes or the mouse changes position, the callback is invoked.\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onMouseChanged(const MouseCallback &cb) = 0;\n        // start capturing\n        virtual std::shared_ptr<IScreenCaptureManager> start_capturing() = 0;\n    };\n\n    // the callback of monitorstocapture represents the list of monitors which should be captured. Users should return the list of monitors they want\n    // to be captured\n    SC_LITE_EXTERN std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> CreateCaptureConfiguration(const MonitorCallback &monitorstocapture);\n    // the callback of windowstocapture represents the list of windows which should be captured. Users should return the list of windows they want to\n    // be captured\n    SC_LITE_EXTERN std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> CreateCaptureConfiguration(const WindowCallback &windowstocapture);\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2017 Scott\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/ScreenCapture.h",
    "content": "#pragma once\n#include <assert.h>\n#include <chrono>\n#include <cstring>\n#include <functional>\n#include <memory>\n#include <string>\n#include <thread>\n#include <vector>\n\n#if defined(WINDOWS) || defined(WIN32)\n#if defined(SC_LITE_DLL)\n#define SC_LITE_EXTERN __declspec(dllexport)\n#else\n#define SC_LITE_EXTERN\n#endif\n#else\n#define SC_LITE_EXTERN\n#endif\n\nnamespace SL {\nnamespace Screen_Capture {\n    struct SC_LITE_EXTERN Point {\n        int x;\n        int y;\n    };\n    struct SC_LITE_EXTERN Window {\n        size_t Handle;\n        Point Position;\n\n        Point Size;\n        // Name will always be lower case. It is converted to lower case internally by the library for comparisons\n        char Name[128] = {0};\n    };\n    struct SC_LITE_EXTERN Monitor {\n        int Id = INT32_MAX;\n        int Index = INT32_MAX;\n        int Adapter = INT32_MAX;\n        int Height = 0;\n        int Width = 0;\n        int OriginalHeight = 0;\n        int OriginalWidth = 0;\n        // Offsets are the number of pixels that a monitor can be from the origin. 
For example, users can shuffle their\n        // monitors around so this affects their offset.\n        int OffsetX = 0;\n        int OffsetY = 0;\n        int OriginalOffsetX = 0;\n        int OriginalOffsetY = 0;\n        char Name[128] = {0};\n        float Scaling = 1.0f;\n    };\n\n    struct Image;\n    struct ImageBGRA {\n        unsigned char B, G, R, A;\n    };\n\n    // index to self in the GetMonitors() function\n    SC_LITE_EXTERN int Index(const Monitor &mointor);\n    // unique identifier\n    SC_LITE_EXTERN int Id(const Monitor &mointor);\n    SC_LITE_EXTERN int Adapter(const Monitor &mointor);\n    SC_LITE_EXTERN int OffsetX(const Monitor &mointor);\n    SC_LITE_EXTERN int OffsetY(const Monitor &mointor);\n    SC_LITE_EXTERN void OffsetX(Monitor &mointor, int x);\n    SC_LITE_EXTERN void OffsetY(Monitor &mointor, int y);\n    SC_LITE_EXTERN int OffsetX(const Window &mointor);\n    SC_LITE_EXTERN int OffsetY(const Window &mointor);\n    SC_LITE_EXTERN void OffsetX(Window &mointor, int x);\n    SC_LITE_EXTERN void OffsetY(Window &mointor, int y);\n    SC_LITE_EXTERN const char *Name(const Monitor &mointor);\n    SC_LITE_EXTERN const char *Name(const Window &mointor);\n    SC_LITE_EXTERN int Height(const Monitor &mointor);\n    SC_LITE_EXTERN int Width(const Monitor &mointor);\n    SC_LITE_EXTERN void Height(Monitor &mointor, int h);\n    SC_LITE_EXTERN void Width(Monitor &mointor, int w);\n    SC_LITE_EXTERN int Height(const Window &mointor);\n    SC_LITE_EXTERN int Width(const Window &mointor);\n    SC_LITE_EXTERN void Height(Window &mointor, int h);\n    SC_LITE_EXTERN void Width(Window &mointor, int w);\n    SC_LITE_EXTERN int Height(const Image &img);\n    SC_LITE_EXTERN int Width(const Image &img);\n    SC_LITE_EXTERN int X(const Point &p);\n    SC_LITE_EXTERN int Y(const Point &p);\n\n    // the start of the image data, this is not guaranteed to be contiguous.\n    SC_LITE_EXTERN const ImageBGRA *StartSrc(const Image &img);\n    SC_LITE_EXTERN const ImageBGRA *GotoNextRow(const Image &img, const ImageBGRA *current);\n    SC_LITE_EXTERN bool isDataContiguous(const Image &img);\n    /*\n        this is the ONLY function for pulling data out of the Image object and is laid out here in the header so that\n        users can see how to extract data and convert it to their own needed format. Initially, I included custom extract functions\n        but this is beyond the scope of this library. You must copy the image data if you want to use it as the library owns the Image Data.\n    */\n    inline void Extract(const Image &img, unsigned char *dst, size_t dst_size)\n    {\n        assert(dst_size >= static_cast<size_t>(Width(img) * Height(img) * sizeof(ImageBGRA)));\n        auto startdst = dst;\n        auto startsrc = StartSrc(img);\n        if (isDataContiguous(img)) { // no padding, the entire copy can be a single memcpy call\n            memcpy(startdst, startsrc, Width(img) * Height(img) * sizeof(ImageBGRA));\n        }\n        else {\n            for (auto i = 0; i < Height(img); i++) {\n                memcpy(startdst, startsrc, sizeof(ImageBGRA) * Width(img));\n                startdst += sizeof(ImageBGRA) * Width(img); // advance to the next row\n                startsrc = GotoNextRow(img, startsrc);      // advance to the next row\n            }\n        }\n    }\n\n    class Timer {\n        using Clock =\n            std::conditional<std::chrono::high_resolution_clock::is_steady, std::chrono::high_resolution_clock, std::chrono::steady_clock>::type;\n\n        std::chrono::microseconds Duration;\n        Clock::time_point Deadline;\n\n      public:\n        template <typename Rep, typename Period>\n        Timer(const std::chrono::duration<Rep, Period> &duration)\n            : Duration(std::chrono::duration_cast<std::chrono::microseconds>(duration)), Deadline(Clock::now() + Duration)\n        {\n        }\n        void start() { Deadline = Clock::now() + Duration; }\n        void wait()\n        {\n   
         const auto now = Clock::now();\n            if (now < Deadline) {\n                std::this_thread::sleep_for(Deadline - now);\n            }\n        }\n        std::chrono::microseconds duration() const { return Duration; }\n    };\n    // will return all attached monitors\n    SC_LITE_EXTERN std::vector<Monitor> GetMonitors();\n    // will return all windows\n    SC_LITE_EXTERN std::vector<Window> GetWindows();\n\n    typedef std::function<void(const SL::Screen_Capture::Image &img, const Window &window)> WindowCaptureCallback;\n    typedef std::function<void(const SL::Screen_Capture::Image &img, const Monitor &monitor)> ScreenCaptureCallback;\n    typedef std::function<void(const SL::Screen_Capture::Image *img, const Point &point)> MouseCallback;\n    typedef std::function<std::vector<Monitor>()> MonitorCallback;\n    typedef std::function<std::vector<Window>()> WindowCallback;\n\n    class SC_LITE_EXTERN IScreenCaptureManager {\n      public:\n        virtual ~IScreenCaptureManager() {}\n\n        // Used by the library to determine the callback frequency\n        template <class Rep, class Period> void setFrameChangeInterval(const std::chrono::duration<Rep, Period> &rel_time)\n        {\n            setFrameChangeInterval(std::make_shared<Timer>(rel_time));\n        }\n        // Used by the library to determine the callback frequency\n        template <class Rep, class Period> void setMouseChangeInterval(const std::chrono::duration<Rep, Period> &rel_time)\n        {\n            setMouseChangeInterval(std::make_shared<Timer>(rel_time));\n        }\n\n        virtual void setFrameChangeInterval(const std::shared_ptr<Timer> &timer) = 0;\n        virtual void setMouseChangeInterval(const std::shared_ptr<Timer> &timer) = 0;\n\n        // Will pause all capturing\n        virtual void pause() = 0;\n        // Will return whether the library is paused\n        virtual bool isPaused() const = 0;\n        // Will resume all capturing if paused, otherwise 
has no effect\n        virtual void resume() = 0;\n    };\n\n    template <typename CAPTURECALLBACK> class ICaptureConfiguration {\n      public:\n        virtual ~ICaptureConfiguration() {}\n        // When a new frame is available the callback is invoked\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onNewFrame(const CAPTURECALLBACK &cb) = 0;\n        // When a change in a frame is detected, the callback is invoked\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onFrameChanged(const CAPTURECALLBACK &cb) = 0;\n        // When a mouse image changes or the mouse changes position, the callback is invoked.\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onMouseChanged(const MouseCallback &cb) = 0;\n        // start capturing\n        virtual std::shared_ptr<IScreenCaptureManager> start_capturing() = 0;\n    };\n\n    // the callback of monitorstocapture represents the list of monitors which should be captured. Users should return the list of monitors they want\n    // to be captured\n    SC_LITE_EXTERN std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> CreateCaptureConfiguration(const MonitorCallback &monitorstocapture);\n    // the callback of windowstocapture represents the list of windows which should be captured. Users should return the list of windows they want to\n    // be captured\n    SC_LITE_EXTERN std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> CreateCaptureConfiguration(const WindowCallback &windowstocapture);\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/SCCommon.h",
    "content": "#pragma once\n#include \"ScreenCapture.h\"\n#include <atomic>\n#include <thread>\n\n// this is INTERNAL DO NOT USE!\nnamespace SL {\nnamespace Screen_Capture {\n\n    template <typename F, typename M, typename W> struct CaptureData {\n        std::shared_ptr<ITimer> FrameTimer;\n        F OnNewFrame;\n        F OnFrameChanged;\n        std::shared_ptr<ITimer> MouseTimer;\n        M OnMouseChanged;\n        W getThingsToWatch;\n    };\n    struct CommonData {\n        // Used to indicate abnormal error condition\n        std::atomic<bool> UnexpectedErrorEvent;\n        // Used to indicate a transition event occurred e.g. PnpStop, PnpStart, mode change, TDR, desktop switch and the application needs to recreate\n        // the duplication interface\n        std::atomic<bool> ExpectedErrorEvent;\n        // Used to signal to threads to exit\n        std::atomic<bool> TerminateThreadsEvent;\n        std::atomic<bool> Paused;\n    };\n    struct Thread_Data {\n\n        CaptureData<ScreenCaptureCallback, MouseCallback, MonitorCallback> ScreenCaptureData;\n        CaptureData<WindowCaptureCallback, MouseCallback, WindowCallback> WindowCaptureData;\n        CommonData CommonData_;\n    };\n\n    class BaseFrameProcessor {\n      public:\n        std::shared_ptr<Thread_Data> Data;\n        std::unique_ptr<unsigned char[]> OldImageBuffer, NewImageBuffer;\n        int ImageBufferSize = 0;\n        bool FirstRun = true;\n    };\n\n    enum DUPL_RETURN { DUPL_RETURN_SUCCESS = 0, DUPL_RETURN_ERROR_EXPECTED = 1, DUPL_RETURN_ERROR_UNEXPECTED = 2 };\n    const int PixelStride = 4;\n    Monitor CreateMonitor(int index, int id, int h, int w, int ox, int oy, const std::string &n, float scale);\n\n    Image Create(const ImageRect &imgrect, int pixelstride, int rowpadding, const unsigned char *data);\n    // this function will copy data from the src into the dst. 
The only requirement is that src must not be larger than dst, but it can be smaller\n    // void Copy(const Image& dst, const Image& src);\n\n    std::vector<ImageRect> GetDifs(const Image &oldimg, const Image &newimg);\n\n    template <class F, class T, class C> void ProcessCapture(const F &data, T &base, const C &mointor, ImageRect &imageract)\n    {\n        if (data.OnNewFrame) {\n            auto wholeimg = Create(imageract, PixelStride, 0, base.NewImageBuffer.get());\n            data.OnNewFrame(wholeimg, mointor);\n        }\n        if (data.OnFrameChanged) {\n            if (base.FirstRun) {\n                // first time through, just send the whole image\n                auto wholeimgfirst = Create(imageract, PixelStride, 0, base.NewImageBuffer.get());\n                data.OnFrameChanged(wholeimgfirst, mointor);\n                base.FirstRun = false;\n            }\n            else {\n                // user wants difs, lets do it!\n                auto newimg = Create(imageract, PixelStride, 0, base.NewImageBuffer.get());\n                auto oldimg = Create(imageract, PixelStride, 0, base.OldImageBuffer.get());\n                auto imgdifs = GetDifs(oldimg, newimg);\n\n                for (auto &r : imgdifs) {\n                    auto padding = (r.left * PixelStride) + ((Width(newimg) - r.right) * PixelStride);\n                    auto startsrc = base.NewImageBuffer.get();\n                    startsrc += (r.left * PixelStride) + (r.top * PixelStride * Width(newimg));\n\n                    auto difimg = Create(r, PixelStride, padding, startsrc);\n                    data.OnFrameChanged(difimg, mointor);\n                }\n            }\n            std::swap(base.NewImageBuffer, base.OldImageBuffer);\n        }\n    }\n\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/ScreenCapture.h",
    "content": "#pragma once\n#include <assert.h>\n#include <chrono>\n#include <cstring>\n#include <functional>\n#include <memory>\n#include <string>\n#include <thread>\n#include <vector>\n\n#if defined(WINDOWS) || defined(WIN32)\n#if defined(SC_LITE_DLL)\n#define SC_LITE_EXTERN __declspec(dllexport)\n#else\n#define SC_LITE_EXTERN\n#endif\n#else\n#define SC_LITE_EXTERN\n#endif\n\nnamespace SL {\nnamespace Screen_Capture {\n    struct Point {\n        int x;\n        int y;\n    };\n    struct Monitor {\n        int Id = INT32_MAX;\n        int Index = INT32_MAX;\n        int Height = 0;\n        int Width = 0;\n        // Offsets are the number of pixels that a monitor can be from the origin. For example, users can shuffle their\n        // monitors around so this affects their offset.\n        int OffsetX = 0;\n        int OffsetY = 0;\n        char Name[128] = {0};\n        float Scaling = 1.0f;\n    };\n\n    struct Window {\n        size_t Handle;\n        Point Position;\n        Point Size;\n        // Name will always be lower case. 
It is converted to lower case internally by the library for comparisons\n        char Name[128] = {0};\n    };\n    struct ImageRect {\n        int left = 0;\n        int top = 0;\n        int right = 0;\n        int bottom = 0;\n        bool Contains(const ImageRect &a) const { return left <= a.left && right >= a.right && top <= a.top && bottom >= a.bottom; }\n    };\n    struct Image {\n        ImageRect Bounds;\n        int Pixelstride = 4;\n        int RowPadding = 0;\n        // image data is BGRA, for example Data[0] = B, Data[1] = G, Data[2] = R, Data[3] = A\n        // alpha is always unused and might contain garbage\n        const unsigned char *Data = nullptr;\n    };\n\n    inline bool operator==(const ImageRect &a, const ImageRect &b)\n    {\n        return b.left == a.left && b.right == a.right && b.top == a.top && b.bottom == a.bottom;\n    }\n    // index to self in the GetMonitors() function\n    SC_LITE_EXTERN int Index(const Monitor &mointor);\n    // unique identifier\n    SC_LITE_EXTERN int Id(const Monitor &mointor);\n    SC_LITE_EXTERN int OffsetX(const Monitor &mointor);\n    SC_LITE_EXTERN int OffsetY(const Monitor &mointor);\n    SC_LITE_EXTERN const char *Name(const Monitor &mointor);\n    SC_LITE_EXTERN int Height(const Monitor &mointor);\n    SC_LITE_EXTERN int Width(const Monitor &mointor);\n\n    SC_LITE_EXTERN int Height(const ImageRect &rect);\n    SC_LITE_EXTERN int Width(const ImageRect &rect);\n\n    SC_LITE_EXTERN int Height(const Image &img);\n    SC_LITE_EXTERN int Width(const Image &img);\n    SC_LITE_EXTERN const ImageRect &Rect(const Image &img);\n\n    // number of bytes per row, NOT including the RowPadding\n    SC_LITE_EXTERN int RowStride(const Image &img);\n    // number of bytes per row of padding\n    SC_LITE_EXTERN int RowPadding(const Image &img);\n    // the start of the image data, this is not guaranteed to be contiguous. 
You must use the Rowstride and rowpadding to\n    // examine the image\n    SC_LITE_EXTERN const unsigned char *StartSrc(const Image &img);\n    SC_LITE_EXTERN void Extract(const Image &img, unsigned char *dst, size_t dst_size);\n    SC_LITE_EXTERN void ExtractAndConvertToRGBA(const Image &img, unsigned char *dst, size_t dst_size);\n    SC_LITE_EXTERN void ExtractAndConvertToRGB(const Image &img, unsigned char *dst, size_t dst_size);\n    SC_LITE_EXTERN void ExtractAndConvertToRGB565(const Image &img, unsigned char *dst, size_t dst_size);\n\n    class ITimer {\n      public:\n        ITimer(){};\n        virtual ~ITimer() {}\n        virtual void start() = 0;\n        virtual void wait() = 0;\n    };\n    template <class Rep, class Period> class Timer : public ITimer {\n        std::chrono::duration<Rep, Period> Rel_Time;\n        std::chrono::time_point<std::chrono::high_resolution_clock> StartTime;\n        std::chrono::time_point<std::chrono::high_resolution_clock> StopTime;\n\n      public:\n        Timer(const std::chrono::duration<Rep, Period> &rel_time) : Rel_Time(rel_time){};\n        virtual ~Timer() {}\n        virtual void start() { StartTime = std::chrono::high_resolution_clock::now(); }\n        virtual void wait()\n        {\n            auto duration = std::chrono::duration_cast<std::chrono::duration<Rep, Period>>(std::chrono::high_resolution_clock::now() - StartTime);\n            auto timetowait = Rel_Time - duration;\n            if (timetowait.count() > 0) {\n                std::this_thread::sleep_for(timetowait);\n            }\n        }\n    };\n    // will return all attached monitors\n    SC_LITE_EXTERN std::vector<Monitor> GetMonitors();\n    // will return all windows\n    SC_LITE_EXTERN std::vector<Window> GetWindows();\n\n    SC_LITE_EXTERN bool isMonitorInsideBounds(const std::vector<Monitor> &monitors, const Monitor &monitor);\n\n    typedef std::function<void(const SL::Screen_Capture::Image &img, const Window &window)> 
WindowCaptureCallback;\n    typedef std::function<void(const SL::Screen_Capture::Image &img, const Monitor &monitor)> ScreenCaptureCallback;\n\n    typedef std::function<void(const SL::Screen_Capture::Image *img, const Point &point)> MouseCallback;\n\n    typedef std::function<std::vector<Monitor>()> MonitorCallback;\n    typedef std::function<std::vector<Window>()> WindowCallback;\n\n    class SC_LITE_EXTERN IScreenCaptureManager {\n      public:\n        virtual ~IScreenCaptureManager() {}\n\n        // Used by the library to determine the callback frequency\n        template <class Rep, class Period> void setFrameChangeInterval(const std::chrono::duration<Rep, Period> &rel_time)\n        {\n            setFrameChangeInterval(std::make_shared<Timer<Rep, Period>>(rel_time));\n        }\n        // Used by the library to determine the callback frequency\n        template <class Rep, class Period> void setMouseChangeInterval(const std::chrono::duration<Rep, Period> &rel_time)\n        {\n            setMouseChangeInterval(std::make_shared<Timer<Rep, Period>>(rel_time));\n        }\n\n        virtual void setFrameChangeInterval(const std::shared_ptr<ITimer> &timer) = 0;\n        virtual void setMouseChangeInterval(const std::shared_ptr<ITimer> &timer) = 0;\n\n        // Will pause all capturing\n        virtual void pause() = 0;\n        // Will return whether the library is paused\n        virtual bool isPaused() const = 0;\n        // Will resume all capturing if paused, otherwise has no effect\n        virtual void resume() = 0;\n    };\n\n    template <typename CAPTURECALLBACK> class ICaptureConfiguration {\n      public:\n        virtual ~ICaptureConfiguration() {}\n        // When a new frame is available the callback is invoked\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onNewFrame(const CAPTURECALLBACK &cb) = 0;\n        // When a change in a frame is detected, the callback is invoked\n        virtual 
std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onFrameChanged(const CAPTURECALLBACK &cb) = 0;\n        // When a mouse image changes or the mouse changes position, the callback is invoked.\n        virtual std::shared_ptr<ICaptureConfiguration<CAPTURECALLBACK>> onMouseChanged(const MouseCallback &cb) = 0;\n        // start capturing\n        virtual std::shared_ptr<IScreenCaptureManager> start_capturing() = 0;\n    };\n\n    // the callback of monitorstocapture represents the list of monitors which should be captured. Users should return the list of monitors they want\n    // to be captured\n    SC_LITE_EXTERN std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> CreateCaptureConfiguration(const MonitorCallback &monitorstocapture);\n    // the callback of windowstocapture represents the list of windows which should be captured. Users should return the list of windows they want to\n    // be captured\n    SC_LITE_EXTERN std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> CreateCaptureConfiguration(const WindowCallback &windowstocapture);\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/ThreadManager.h",
    "content": "#pragma once\n#include \"SCCommon.h\"\n#include \"ScreenCapture.h\"\n#include <atomic>\n#include <iostream>\n#include <memory>\n#include <string>\n#include <thread>\n#include <vector>\n\nusing namespace std::chrono_literals;\n\n// this is internal stuff..\nnamespace SL {\nnamespace Screen_Capture {\n    class ThreadManager {\n\n        std::vector<std::thread> m_ThreadHandles;\n        std::shared_ptr<std::atomic_bool> TerminateThreadsEvent;\n\n      public:\n        ThreadManager();\n        ~ThreadManager();\n        void Init(const std::shared_ptr<Thread_Data> &settings);\n        void Join();\n    };\n\n    template <class T, class F, class... E> bool TryCaptureMouse(const F &data, E... args)\n    {\n        T frameprocessor;\n        auto ret = frameprocessor.Init(data);\n        if (ret != DUPL_RETURN_SUCCESS) {\n            return false;\n        }\n        frameprocessor.ImageBufferSize = frameprocessor.MaxCursurorSize * frameprocessor.MaxCursurorSize * PixelStride;\n\n        frameprocessor.OldImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n        frameprocessor.NewImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n\n        while (!data->CommonData_.TerminateThreadsEvent) {\n            // get a copy of the shared_ptr in a safe way\n\n            std::shared_ptr<ITimer> timer;\n            // i want to use if const expr here but apple is slow to update their compiler so I have to use this\n            if (sizeof...(args) == 1) {\n                timer = std::atomic_load(&data->WindowCaptureData.FrameTimer);\n            }\n            else {\n                timer = std::atomic_load(&data->ScreenCaptureData.FrameTimer);\n            }\n\n            timer->start();\n            // Process Frame\n            ret = frameprocessor.ProcessFrame();\n            if (ret != DUPL_RETURN_SUCCESS) {\n                if (ret == DUPL_RETURN_ERROR_EXPECTED) {\n                    // The 
system is in a transition state so request the duplication be restarted\n                    data->CommonData_.ExpectedErrorEvent = true;\n                    std::cout << \"Exiting Thread due to expected error \" << std::endl;\n                }\n                else {\n                    // Unexpected error so exit the application\n                    data->CommonData_.UnexpectedErrorEvent = true;\n                    std::cout << \"Exiting Thread due to Unexpected error \" << std::endl;\n                }\n                return true;\n            }\n            timer->wait();\n            while (data->CommonData_.Paused) {\n                std::this_thread::sleep_for(50ms);\n            }\n        }\n        return true;\n    }\n\n    inline bool HasMonitorsChanged(const std::vector<Monitor> &startmonitors, const std::vector<Monitor> &nowmonitors)\n    {\n        if (startmonitors.size() != nowmonitors.size())\n            return true;\n        for (size_t i = 0; i < startmonitors.size(); i++) {\n            if (startmonitors[i].Height != nowmonitors[i].Height || startmonitors[i].Id != nowmonitors[i].Id ||\n                startmonitors[i].Index != nowmonitors[i].Index || startmonitors[i].OffsetX != nowmonitors[i].OffsetX ||\n                startmonitors[i].OffsetY != nowmonitors[i].OffsetY || startmonitors[i].Width != nowmonitors[i].Width)\n                return true;\n        }\n        return false;\n    }\n    template <class T, class F> bool TryCaptureMonitor(const F &data, Monitor &monitor)\n    {\n        T frameprocessor;\n        auto startmonitors = GetMonitors();\n        auto ret = frameprocessor.Init(data, monitor);\n        if (ret != DUPL_RETURN_SUCCESS) {\n            return false;\n        }\n        frameprocessor.ImageBufferSize = Width(monitor) * Height(monitor) * PixelStride;\n        if (data->ScreenCaptureData.OnFrameChanged) { // only need the old buffer if difs are needed. 
If no dif is needed, then the\n                                                      // image is always new\n            frameprocessor.OldImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n            frameprocessor.NewImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n        }\n        if ((data->ScreenCaptureData.OnNewFrame) && !frameprocessor.NewImageBuffer) {\n            frameprocessor.NewImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n        }\n        while (!data->CommonData_.TerminateThreadsEvent) {\n            // get a copy of the shared_ptr in a safe way\n            auto timer = std::atomic_load(&data->ScreenCaptureData.FrameTimer);\n            timer->start();\n            auto monitors = GetMonitors();\n            if (isMonitorInsideBounds(monitors, monitor) && !HasMonitorsChanged(startmonitors, monitors)) {\n                ret = frameprocessor.ProcessFrame(monitors[Index(monitor)]);\n            }\n            else {\n                // something happened, rebuild\n                ret = DUPL_RETURN_ERROR_EXPECTED;\n            }\n\n            if (ret != DUPL_RETURN_SUCCESS) {\n                if (ret == DUPL_RETURN_ERROR_EXPECTED) {\n                    // The system is in a transition state so request the duplication be restarted\n                    data->CommonData_.ExpectedErrorEvent = true;\n                    std::cout << \"Exiting Thread due to expected error \" << std::endl;\n                }\n                else {\n                    // Unexpected error so exit the application\n                    data->CommonData_.UnexpectedErrorEvent = true;\n                    std::cout << \"Exiting Thread due to Unexpected error \" << std::endl;\n                }\n                return true;\n            }\n            timer->wait();\n            while (data->CommonData_.Paused) {\n                std::this_thread::sleep_for(50ms);\n            
}\n        }\n        return true;\n    }\n\n    template <class T, class F> bool TryCaptureWindow(const F &data, Window &wnd)\n    {\n        T frameprocessor;\n        auto ret = frameprocessor.Init(data, wnd);\n        if (ret != DUPL_RETURN_SUCCESS) {\n            return false;\n        }\n\n        frameprocessor.ImageBufferSize = wnd.Size.x * wnd.Size.y * PixelStride;\n        if (data->WindowCaptureData.OnFrameChanged) { // only need the old buffer if difs are needed. If no dif is needed, then the\n                                                      // image is always new\n            frameprocessor.OldImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n            frameprocessor.NewImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n        }\n        if ((data->WindowCaptureData.OnNewFrame) && !frameprocessor.NewImageBuffer) {\n            frameprocessor.NewImageBuffer = std::make_unique<unsigned char[]>(frameprocessor.ImageBufferSize);\n        }\n        while (!data->CommonData_.TerminateThreadsEvent) {\n            // get a copy of the shared_ptr in a safe way\n            auto timer = std::atomic_load(&data->WindowCaptureData.FrameTimer);\n            timer->start();\n            ret = frameprocessor.ProcessFrame(wnd);\n            if (ret != DUPL_RETURN_SUCCESS) {\n                if (ret == DUPL_RETURN_ERROR_EXPECTED) {\n                    // The system is in a transition state so request the duplication be restarted\n                    data->CommonData_.ExpectedErrorEvent = true;\n                    std::cout << \"Exiting Thread due to expected error \" << std::endl;\n                }\n                else {\n                    // Unexpected error so exit the application\n                    data->CommonData_.UnexpectedErrorEvent = true;\n                    std::cout << \"Exiting Thread due to Unexpected error \" << std::endl;\n                }\n                return true;\n      
      }\n            timer->wait();\n            while (data->CommonData_.Paused) {\n                std::this_thread::sleep_for(50ms);\n            }\n        }\n        return true;\n    }\n\n    void RunCaptureMonitor(std::shared_ptr<Thread_Data> data, Monitor monitor);\n    void RunCaptureWindow(std::shared_ptr<Thread_Data> data, Window window);\n\n    void RunCaptureMouse(std::shared_ptr<Thread_Data> data);\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/ios/CGFrameProcessor.h",
    "content": "#pragma once\n#include \"SCCommon.h\"\n#include <memory>\n\nnamespace SL {\n    namespace Screen_Capture {\n        class CGFrameProcessor: public BaseFrameProcessor {\n            Monitor SelectedMonitor;\n        public:\n            \n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, Monitor& monitor);\n            DUPL_RETURN ProcessFrame(const Monitor& curentmonitorinfo);\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, Window& window);\n            DUPL_RETURN ProcessFrame(const Window& window);\n        };\n\n    }\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/ios/NSMouseCapture.h",
    "content": "//\n//  Header.h\n//  Screen_Capture\n//\n//  Created by scott lee on 1/11/17.\n//\n//\n\n#ifndef Header_h\n#define Header_h\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n    void SLScreen_Capture_InitMouseCapture();\n    CGImageRef SLScreen_Capture_GetCurrentMouseImage();\n#ifdef __cplusplus\n}\n#endif\n\n\n#endif /* Header_h */\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/ios/NSMouseProcessor.h",
    "content": "#pragma once\n#include \"SCCommon.h\"\n#include <memory>\n#include \"TargetConditionals.h\"\n#include <ApplicationServices/ApplicationServices.h>\n\nnamespace SL {\n    namespace Screen_Capture {\n        \n        class NSMouseProcessor: public BaseFrameProcessor {\n            int Last_x =0;\n            int Last_y =0;\n            \n        public:\n            const int MaxCursurorSize=32;\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data);\n            DUPL_RETURN ProcessFrame();\n\n        };\n\n    }\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/linux/X11FrameProcessor.h",
    "content": "#pragma once\n#include \"SCCommon.h\"\n#include <memory>\n#include <X11/Xlib.h>\n#include <sys/shm.h>\n#include <X11/extensions/XShm.h>\n\nnamespace SL {\n    namespace Screen_Capture {\n   \n        class X11FrameProcessor: public BaseFrameProcessor {\n            \n\t\t\tDisplay* SelectedDisplay=nullptr;\n            XID SelectedWindow = 0;\n\t\t\tXImage* Image=nullptr;\n\t\t\tstd::unique_ptr<XShmSegmentInfo> ShmInfo;\n            Monitor SelectedMonitor;\n            \n        public:\n            X11FrameProcessor();\n            ~X11FrameProcessor();\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, Monitor& monitor);\n            DUPL_RETURN ProcessFrame(const Monitor& currentmonitorinfo);\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, const Window& selectedwindow);\n            DUPL_RETURN ProcessFrame(Window& selectedwindow);\n        };\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/linux/X11MouseProcessor.h",
    "content": "#pragma once\n#include \"SCCommon.h\"\n#include <memory>\n#include <X11/X.h>\n#include <X11/extensions/Xfixes.h>\n\nnamespace SL {\n    namespace Screen_Capture {\n        \n        class X11MouseProcessor: public BaseFrameProcessor {\n            Display* SelectedDisplay=nullptr;\n            XID RootWindow;\n            int Last_x = 0;\n            int Last_y =0;\n            \n        public:\n            const int MaxCursurorSize =32;\n            X11MouseProcessor();\n            ~X11MouseProcessor();\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data);\n            DUPL_RETURN ProcessFrame();\n\n        };\n\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/windows/DXFrameProcessor.h",
    "content": "#pragma once\n#include \"SCCommon.h\"\n#include <memory>\n\n#define NOMINMAX\n#define WIN32_LEAN_AND_MEAN\n#include <windows.h>\n#include <d3d11.h>\n#include <dxgi1_2.h>\n#include <wrl.h>\n\n#pragma comment(lib,\"dxgi.lib\")\n#pragma comment(lib,\"d3d11.lib\")\n\nnamespace SL {\n    namespace Screen_Capture {\n        class DXFrameProcessor: public BaseFrameProcessor {\n            Microsoft::WRL::ComPtr<ID3D11Device> Device;\n            Microsoft::WRL::ComPtr<ID3D11DeviceContext> DeviceContext;\n            Microsoft::WRL::ComPtr<ID3D11Texture2D> StagingSurf;\n\n            Microsoft::WRL::ComPtr<IDXGIOutputDuplication> OutputDuplication;\n            DXGI_OUTPUT_DESC OutputDesc;\n            UINT Output;\n            std::vector<BYTE> MetaDataBuffer;\n            Monitor SelectedMonitor;\n\n        public:\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, Monitor& monitor);\n            DUPL_RETURN ProcessFrame(const Monitor& currentmonitorinfo);\n\n        };\n\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/windows/GDIFrameProcessor.h",
    "content": "#pragma once\n#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <memory>\n#include \"GDIHelpers.h\"\n\nnamespace SL {\n    namespace Screen_Capture {\n\n        class GDIFrameProcessor : public BaseFrameProcessor {\n            HDCWrapper MonitorDC;\n            HDCWrapper CaptureDC;\n            HBITMAPWrapper CaptureBMP;\n            Monitor SelectedMonitor;\n            HWND SelectedWindow;\n\n            std::shared_ptr<Thread_Data> Data;\n        public:\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, const Monitor& monitor);\n            DUPL_RETURN ProcessFrame(const Monitor& currentmonitorinfo);\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, const Window& selectedwindow);\n            DUPL_RETURN ProcessFrame(Window& selectedwindow);\n        };\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/windows/GDIHelpers.h",
    "content": "#pragma once\n\n#define NOMINMAX\n#define WIN32_LEAN_AND_MEAN\n#include <windows.h>\n#include <Dwmapi.h>\nnamespace SL {\n    namespace Screen_Capture {\n        class HDCWrapper {\n        public:\n            HDCWrapper() : DC(nullptr) {}\n            ~HDCWrapper() { if (DC != nullptr) { DeleteDC(DC); } }\n            HDC DC;\n        };\n        class HBITMAPWrapper {\n        public:\n            HBITMAPWrapper() : Bitmap(nullptr) {}\n            ~HBITMAPWrapper() { if (Bitmap != nullptr) { DeleteObject(Bitmap); } }\n            HBITMAP Bitmap;\n        };\n        struct WindowDimensions {\n            RECT ClientRect;\n            RECT ClientBorder;\n            WINDOWPLACEMENT Placement;\n        };\n\n        inline WindowDimensions GetWindowRect(HWND hwnd) {\n            WindowDimensions ret = { 0 };\n            GetWindowRect(hwnd, &ret.ClientRect);\n            ret.Placement.length = sizeof(WINDOWPLACEMENT);\n            GetWindowPlacement(hwnd, &ret.Placement);\n\n            RECT frame = { 0 };\n            if (SUCCEEDED(DwmGetWindowAttribute(hwnd, DWMWA_EXTENDED_FRAME_BOUNDS, &frame, sizeof(frame)))) {\n\n                ret.ClientBorder.left = frame.left - ret.ClientRect.left;\n                ret.ClientBorder.top = frame.top - ret.ClientRect.top;\n                ret.ClientBorder.right = ret.ClientRect.right - frame.right;\n                ret.ClientBorder.bottom = ret.ClientRect.bottom - frame.bottom;\n            }\n\n            ret.ClientRect.bottom -= ret.ClientBorder.bottom;\n            ret.ClientRect.top += ret.ClientBorder.top;\n            ret.ClientRect.left += ret.ClientBorder.left;\n            ret.ClientRect.right -= ret.ClientBorder.right;\n\n            return ret;\n        }\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/windows/GDIMouseProcessor.h",
    "content": "#pragma once\n#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <memory>\n#include \"GDIHelpers.h\"\n\nnamespace SL {\n    namespace Screen_Capture {\n\n        class GDIMouseProcessor : public BaseFrameProcessor {\n\n            HDCWrapper MonitorDC;\n            HDCWrapper CaptureDC;\n            std::shared_ptr<Thread_Data> Data;\n\n            int Last_x = 0;\n            int Last_y = 0;\n\n        public:\n\n            const int MaxCursurorSize = 32;\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data);\n            DUPL_RETURN ProcessFrame();\n\n      \n        };\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/include/windows/GDIWindowProcessor.h",
    "content": "#pragma once\n#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <memory>\n#include \"GDIHelpers.h\"\n\nnamespace SL {\n    namespace Screen_Capture {\n\n        class GDIWindowProcessor : public BaseFrameProcessor {\n            HDCWrapper MonitorDC;\n            HDCWrapper CaptureDC;\n            HBITMAPWrapper CaptureBMP;\n            Window Window_;\n            HWND SelectedWindow;\n            std::shared_ptr<Thread_Data> Data;\n        public:\n            DUPL_RETURN Init(std::shared_ptr<Thread_Data> data, Window selectedwindow);\n            DUPL_RETURN ProcessFrame(Window selectedwindow);\n\n        };\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/SCCommon.cpp",
    "content": "#include \"SCCommon.h\"\n#include <algorithm>\n#include <assert.h>\n#include <chrono>\n#include <iostream>\n#include <string.h>\n\nnamespace SL {\nnamespace Screen_Capture {\n\n    void SanitizeRects(std::vector<ImageRect> &rects, const Image &img)\n    {\n        for (auto &r : rects) {\n            if (r.right > Width(img)) {\n                r.right = Width(img);\n            }\n            if (r.bottom > Height(img)) {\n                r.bottom = Height(img);\n            }\n        }\n    }\n#define maxdist 256\n\n    std::vector<ImageRect> GetDifs(const Image &oldimg, const Image &newimg)\n    {\n        std::vector<ImageRect> rects;\n        if (Width(oldimg) != Width(newimg) || Height(oldimg) != Height(newimg)) {\n            rects.push_back(Rect(newimg));\n            return rects;\n        }\n        rects.reserve((Height(newimg) / maxdist) + 1 * (Width(newimg) / maxdist) + 1);\n\n        auto oldimg_ptr = (const int *)StartSrc(oldimg);\n        auto newimg_ptr = (const int *)StartSrc(newimg);\n\n        auto imgwidth = Width(oldimg);\n        auto imgheight = Height(oldimg);\n\n        for (decltype(imgheight) row = 0; row < imgheight; row += maxdist) {\n            for (decltype(imgwidth) col = 0; col < imgwidth; col += maxdist) {\n\n                for (decltype(row) y = row; y < maxdist + row && y < imgheight; y++) {\n                    for (decltype(col) x = col; x < maxdist + col && x < imgwidth; x++) {\n                        auto old = oldimg_ptr[y * imgwidth + x];\n                        auto ne = newimg_ptr[y * imgwidth + x];\n                        if (ne != old) {\n                            ImageRect re;\n\n                            re.left = col;\n                            re.top = row;\n                            re.bottom = row + maxdist;\n                            re.right = col + maxdist;\n                            rects.push_back(re);\n                            y += maxdist;\n                            x 
+= maxdist;\n                        }\n                    }\n                }\n            }\n        }\n\n        if (rects.size() <= 2) {\n            SanitizeRects(rects, newimg);\n            return rects; // make sure there is at least 2\n        }\n\n        std::vector<ImageRect> outrects;\n        outrects.reserve(rects.size());\n        outrects.push_back(rects[0]);\n        // horizontal scan\n        int expandcount = 0;\n        int containedcount = 0;\n        for (size_t i = 1; i < rects.size(); i++) {\n            if (outrects.back().right == rects[i].left && outrects.back().bottom == rects[i].bottom) {\n                outrects.back().right = rects[i].right;\n                expandcount++;\n            }\n            else {\n                outrects.push_back(rects[i]);\n                containedcount++;\n            }\n        }\n\n        if (expandcount != 0 || containedcount != 0) {\n            // std::cout << \"On Horizontal Scan expandcount \" << expandcount << \"\n            // containedcount \" << containedcount << std::endl;\n        }\n        expandcount = 0;\n        containedcount = 0;\n\n        if (outrects.size() <= 2) {\n            SanitizeRects(outrects, newimg);\n            return outrects; // make sure there is at least 2\n        }\n        rects.resize(0);\n        // vertical scan\n        for (auto &otrect : outrects) {\n\n            auto found = std::find_if(rects.rbegin(), rects.rend(), [=](const ImageRect &rec) {\n                return rec.bottom == otrect.top && rec.left == otrect.left && rec.right == otrect.right;\n            });\n            if (found == rects.rend()) {\n                rects.push_back(otrect);\n                containedcount++;\n            }\n            else {\n                found->bottom = otrect.bottom;\n                expandcount++;\n            }\n        }\n        if (expandcount != 0 || containedcount != 0) {\n            // std::cout << \"On Horizontal Scan expandcount \" << 
expandcount << \"\n            // containedcount \" << containedcount << std::endl;\n        }\n        /*\tfor (auto& r : rects) {\n          std::cout << r << std::endl;\n      }\n      */\n        // std::cout << \"Found \" << rects.size() << \" rects in img dif. It took \" <<\n        // elapsed.count() << \" milliseconds to compare run GetDifs \";\n\n        SanitizeRects(rects, newimg);\n        return rects;\n    }\n\n    Monitor CreateMonitor(int index, int id, int h, int w, int ox, int oy, const std::string &n, float scaling)\n    {\n        Monitor ret = {};\n        ret.Index = index;\n        ret.Height = h;\n        ret.Id = id;\n        assert(n.size() + 1 < sizeof(ret.Name));\n        memcpy(ret.Name, n.c_str(), n.size() + 1);\n        ret.OffsetX = ox;\n        ret.OffsetY = oy;\n        ret.Width = w;\n        ret.Scaling = scaling;\n        return ret;\n    }\n\n    Image Create(const ImageRect &imgrect, int pixelstride, int rowpadding, const unsigned char *data)\n    {\n        Image ret;\n        ret.Bounds = imgrect;\n        ret.Data = data;\n        ret.Pixelstride = pixelstride;\n        ret.RowPadding = rowpadding;\n        return ret;\n    }\n    int Index(const Monitor &mointor) { return mointor.Index; }\n    int Id(const Monitor &mointor) { return mointor.Id; }\n    int OffsetX(const Monitor &mointor) { return mointor.OffsetX; }\n    int OffsetY(const Monitor &mointor) { return mointor.OffsetY; }\n    const char *Name(const Monitor &mointor) { return mointor.Name; }\n    int Height(const Monitor &mointor) { return mointor.Height; }\n    int Width(const Monitor &mointor) { return mointor.Width; }\n    int Height(const ImageRect &rect) { return rect.bottom - rect.top; }\n    int Width(const ImageRect &rect) { return rect.right - rect.left; }\n    int Height(const Image &img) { return Height(img.Bounds); }\n    int Width(const Image &img) { return Width(img.Bounds); }\n    const ImageRect &Rect(const Image &img) { return img.Bounds; }\n\n   
 // number of bytes per row, NOT including the Rowpadding\n    int RowStride(const Image &img) { return img.Pixelstride * Width(img); }\n    // number of bytes per row of padding\n    int RowPadding(const Image &img) { return img.RowPadding; }\n    const unsigned char *StartSrc(const Image &img) { return img.Data; }\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ScreenCapture.cpp",
    "content": "#include \"SCCommon.h\"\n#include \"ScreenCapture.h\"\n#include \"ThreadManager.h\"\n#include <algorithm>\n#include <assert.h>\n#include <atomic>\n#include <cstring>\n#include <iostream>\n#include <memory>\n#include <thread>\n\nnamespace SL {\nnamespace Screen_Capture {\n\n    void Extract(const Image &img, unsigned char *dst, size_t dst_size)\n    {\n        assert(dst_size >= static_cast<size_t>(RowStride(img) * Height(img)));\n        auto startdst = dst;\n        auto startsrc = StartSrc(img);\n        if (RowPadding(img) == 0) { // no padding, the entire copy can be a single memcpy call\n            memcpy(startdst, startsrc, RowStride(img) * Height(img));\n        }\n        else {\n            for (auto i = 0; i < Height(img); i++) {\n                memcpy(startdst, startsrc, RowStride(img));\n                startdst += RowStride(img);                   // advance to the next row\n                startsrc += RowStride(img) + RowPadding(img); // advance to the next row\n            }\n        }\n    }\n\n    void ExtractAndConvertToRGBA(const Image &img, unsigned char *dst, size_t dst_size)\n    {\n\n        assert(dst_size >= static_cast<size_t>(RowStride(img) * Height(img)));\n        auto imgsrc = StartSrc(img);\n        auto imgdist = dst;\n        for (auto h = 0; h < Height(img); h++) {\n            for (auto w = 0; w < Width(img); w++) {\n                *imgdist++ = *(imgsrc + 2);\n                *imgdist++ = *(imgsrc + 1);\n                *imgdist++ = *(imgsrc);\n                *imgdist++ = 0; // alpha should be zero\n                imgsrc += img.Pixelstride;\n            }\n            imgsrc += RowPadding(img);\n        }\n    }\n    void ExtractAndConvertToRGB(const Image &img, unsigned char *dst, size_t dst_size)\n    {\n        assert(dst_size >= static_cast<size_t>(Width(img) * 3 * Height(img)));\n        auto imgsrc = StartSrc(img);\n        auto imgdist = dst;\n        for (auto h = 0; h < Height(img); h++) {\n           
 for (auto w = 0; w < Width(img); w++) {\n                *imgdist++ = *(imgsrc + 2);\n                *imgdist++ = *(imgsrc + 1);\n                *imgdist++ = *(imgsrc);\n                imgsrc += img.Pixelstride;\n            }\n            imgsrc += RowPadding(img);\n        }\n    }\n\n    void ExtractAndConvertToRGB565(const Image &img, unsigned char *dst, size_t dst_size)\n    {\n        assert(dst_size >= static_cast<size_t>(Width(img) * 2 * Height(img)));\n        auto imgsrc = StartSrc(img);\n        auto imgdist = dst;\n        for (auto h = 0; h < Height(img); h++) {\n            for (auto w = 0; w < Width(img); w++) {\n                unsigned char r = (*(imgsrc + 2)) & 0xF8;\n                unsigned char g = (*(imgsrc + 1)) & 0xFC;\n                unsigned char b = (*imgsrc) & 0xF8;\n                int short rgb = (r << 8) | (g << 3) | (b >> 3);\n                *imgdist++ = static_cast<unsigned char>(rgb);\n                *imgdist++ = static_cast<unsigned char>(rgb >> 8);\n                imgsrc += img.Pixelstride;\n            }\n            imgsrc += RowPadding(img);\n        }\n    }\n\n    bool isMonitorInsideBounds(const std::vector<Monitor> &monitors, const Monitor &monitor)\n    {\n\n        auto totalwidth = 0;\n        for (auto &m : monitors) {\n            totalwidth += Width(m);\n        }\n\n        // if the monitor doesnt exist any more!\n        if (std::find_if(begin(monitors), end(monitors), [&](auto &m) { return m.Id == monitor.Id; }) == end(monitors)) {\n            return false;\n        } // if the area to capture is outside the dimensions of the desktop!!\n        auto &realmonitor = monitors[Index(monitor)];\n        if (Height(realmonitor) < Height(monitor) ||          // monitor height check\n            totalwidth < Width(monitor) + OffsetX(monitor) || // total width check\n            Width(monitor) > Width(realmonitor))              // regular width check\n\n        {\n            return false;\n        } // if the 
entire screen is captured and the offsets changed, get out and rebuild\n        else if (Height(realmonitor) == Height(monitor) && Width(realmonitor) == Width(monitor) &&\n                 (OffsetX(realmonitor) != OffsetX(monitor) || OffsetY(realmonitor) != OffsetY(monitor))) {\n            return false;\n        }\n        return true;\n    }\n    static bool ScreenCaptureManagerExists = false;\n    class ScreenCaptureManager : public IScreenCaptureManager {\n        \n\n      public:\n        // all threads share the same data!!!\n        std::shared_ptr<Thread_Data> Thread_Data_;\n\n        std::thread Thread_;\n\n        ScreenCaptureManager()\n        {\n            //you must ONLY HAVE ONE INSTANCE RUNNING AT A TIME. Destroy the first instance, then create a new one!\n            assert(!ScreenCaptureManagerExists);\n            ScreenCaptureManagerExists = true;\n            Thread_Data_ = std::make_shared<Thread_Data>();\n            Thread_Data_->CommonData_.Paused = false;\n            Thread_Data_->ScreenCaptureData.FrameTimer = std::make_shared<Timer<long long, std::milli>>(100ms);\n            Thread_Data_->ScreenCaptureData.MouseTimer = std::make_shared<Timer<long long, std::milli>>(50ms);\n            Thread_Data_->WindowCaptureData.FrameTimer = std::make_shared<Timer<long long, std::milli>>(100ms);\n            Thread_Data_->WindowCaptureData.MouseTimer = std::make_shared<Timer<long long, std::milli>>(50ms);\n        }\n        virtual ~ScreenCaptureManager()\n        {\n            Thread_Data_->CommonData_.TerminateThreadsEvent = true; // set the exit flag for the threads\n            Thread_Data_->CommonData_.Paused = false;               // unpause the threads to let everything exit\n            if (Thread_.get_id() == std::this_thread::get_id()) {\n                Thread_.detach();\n            }\n            else if (Thread_.joinable()) {\n                Thread_.join();\n            }\n            ScreenCaptureManagerExists = false;\n        }\n        
void start()\n        {\n            Thread_ = std::thread([&]() {\n                ThreadManager ThreadMgr;\n\n                ThreadMgr.Init(Thread_Data_);\n\n                while (!Thread_Data_->CommonData_.TerminateThreadsEvent) {\n\n                    if (Thread_Data_->CommonData_.ExpectedErrorEvent) {\n                        Thread_Data_->CommonData_.TerminateThreadsEvent = true;\n                        ThreadMgr.Join();\n                        Thread_Data_->CommonData_.ExpectedErrorEvent = Thread_Data_->CommonData_.UnexpectedErrorEvent =\n                            Thread_Data_->CommonData_.TerminateThreadsEvent = false;\n                        // Clean up\n                        std::this_thread::sleep_for(std::chrono::milliseconds(1000)); // sleep for 1 second since an error occurred\n\n                        ThreadMgr.Init(Thread_Data_);\n                    }\n                    std::this_thread::sleep_for(std::chrono::milliseconds(50));\n                }\n                Thread_Data_->CommonData_.TerminateThreadsEvent = true;\n                ThreadMgr.Join();\n            });\n        }\n        virtual void setFrameChangeInterval(const std::shared_ptr<ITimer> &timer) override\n        {\n            std::atomic_store(&Thread_Data_->ScreenCaptureData.FrameTimer, timer);\n            std::atomic_store(&Thread_Data_->WindowCaptureData.FrameTimer, timer);\n        }\n        virtual void setMouseChangeInterval(const std::shared_ptr<ITimer> &timer) override\n        {\n            std::atomic_store(&Thread_Data_->ScreenCaptureData.MouseTimer, timer);\n            std::atomic_store(&Thread_Data_->WindowCaptureData.MouseTimer, timer);\n        }\n        virtual void pause() override { Thread_Data_->CommonData_.Paused = true; }\n        virtual bool isPaused() const override { return Thread_Data_->CommonData_.Paused; }\n        virtual void resume() override { Thread_Data_->CommonData_.Paused = false; }\n    };\n\n    class 
ScreenCaptureConfiguration : public ICaptureConfiguration<ScreenCaptureCallback> {\n        std::shared_ptr<ScreenCaptureManager> Impl_;\n\n      public:\n        ScreenCaptureConfiguration(const std::shared_ptr<ScreenCaptureManager> &impl) : Impl_(impl) {}\n\n        virtual std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> onNewFrame(const ScreenCaptureCallback &cb) override\n        {\n            assert(!Impl_->Thread_Data_->ScreenCaptureData.OnNewFrame);\n            Impl_->Thread_Data_->ScreenCaptureData.OnNewFrame = cb;\n            return std::make_shared<ScreenCaptureConfiguration>(Impl_);\n        }\n        virtual std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> onFrameChanged(const ScreenCaptureCallback &cb) override\n        {\n            assert(!Impl_->Thread_Data_->ScreenCaptureData.OnFrameChanged);\n            Impl_->Thread_Data_->ScreenCaptureData.OnFrameChanged = cb;\n            return std::make_shared<ScreenCaptureConfiguration>(Impl_);\n        }\n        virtual std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> onMouseChanged(const MouseCallback &cb) override\n        {\n            assert(!Impl_->Thread_Data_->ScreenCaptureData.OnMouseChanged);\n            Impl_->Thread_Data_->ScreenCaptureData.OnMouseChanged = cb;\n            return std::make_shared<ScreenCaptureConfiguration>(Impl_);\n        }\n        virtual std::shared_ptr<IScreenCaptureManager> start_capturing() override\n        {\n            assert(Impl_->Thread_Data_->ScreenCaptureData.OnMouseChanged || Impl_->Thread_Data_->ScreenCaptureData.OnFrameChanged ||\n                   Impl_->Thread_Data_->ScreenCaptureData.OnNewFrame);\n            Impl_->start();\n            return Impl_;\n        }\n    };\n\n    class WindowCaptureConfiguration : public ICaptureConfiguration<WindowCaptureCallback> {\n        std::shared_ptr<ScreenCaptureManager> Impl_;\n\n      public:\n        WindowCaptureConfiguration(const 
std::shared_ptr<ScreenCaptureManager> &impl) : Impl_(impl) {}\n\n        virtual std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> onNewFrame(const WindowCaptureCallback &cb) override\n        {\n            assert(!Impl_->Thread_Data_->WindowCaptureData.OnNewFrame);\n            Impl_->Thread_Data_->WindowCaptureData.OnNewFrame = cb;\n            return std::make_shared<WindowCaptureConfiguration>(Impl_);\n        }\n        virtual std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> onFrameChanged(const WindowCaptureCallback &cb) override\n        {\n            assert(!Impl_->Thread_Data_->WindowCaptureData.OnFrameChanged);\n            Impl_->Thread_Data_->WindowCaptureData.OnFrameChanged = cb;\n            return std::make_shared<WindowCaptureConfiguration>(Impl_);\n        }\n        virtual std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> onMouseChanged(const MouseCallback &cb) override\n        {\n\n            assert(!Impl_->Thread_Data_->WindowCaptureData.OnMouseChanged);\n            Impl_->Thread_Data_->WindowCaptureData.OnMouseChanged = cb;\n            return std::make_shared<WindowCaptureConfiguration>(Impl_);\n        }\n        virtual std::shared_ptr<IScreenCaptureManager> start_capturing() override\n        {\n            assert(Impl_->Thread_Data_->WindowCaptureData.OnMouseChanged || Impl_->Thread_Data_->WindowCaptureData.OnFrameChanged ||\n                   Impl_->Thread_Data_->WindowCaptureData.OnNewFrame);\n            Impl_->start();\n            return Impl_;\n        }\n    };\n    std::shared_ptr<ICaptureConfiguration<ScreenCaptureCallback>> CreateCaptureConfiguration(const MonitorCallback &monitorstocapture)\n    {\n        auto impl = std::make_shared<ScreenCaptureManager>();\n        impl->Thread_Data_->ScreenCaptureData.getThingsToWatch = monitorstocapture;\n        return std::make_shared<ScreenCaptureConfiguration>(impl);\n    }\n\n    std::shared_ptr<ICaptureConfiguration<WindowCaptureCallback>> 
CreateCaptureConfiguration(const WindowCallback &windowtocapture)\n    {\n        auto impl = std::make_shared<ScreenCaptureManager>();\n        impl->Thread_Data_->WindowCaptureData.getThingsToWatch = windowtocapture;\n        return std::make_shared<WindowCaptureConfiguration>(impl);\n    }\n} // namespace Screen_Capture\n} // namespace SL\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ThreadManager.cpp",
    "content": "#include \"ThreadManager.h\"\n#include <assert.h>\n#include <algorithm>\n\nSL::Screen_Capture::ThreadManager::ThreadManager()\n{\n}\nSL::Screen_Capture::ThreadManager::~ThreadManager()\n{\n    Join();\n}\n\nvoid SL::Screen_Capture::ThreadManager::Init(const std::shared_ptr<Thread_Data>& data)\n{\n    assert(m_ThreadHandles.empty());\n\n    if (data->ScreenCaptureData.getThingsToWatch) {\n        auto monitors = data->ScreenCaptureData.getThingsToWatch();\n        auto mons = GetMonitors();\n        for (auto& m : monitors) {\n            assert(isMonitorInsideBounds(mons, m));\n        }\n\n        m_ThreadHandles.resize(monitors.size() + (data->ScreenCaptureData.OnMouseChanged ? 1 : 0)); // add another thread for mouse capturing if needed\n\n        for (size_t i = 0; i < monitors.size(); ++i) {\n            m_ThreadHandles[i] = std::thread(&SL::Screen_Capture::RunCaptureMonitor, data, monitors[i]);\n        }\n        if (data->ScreenCaptureData.OnMouseChanged) {\n            m_ThreadHandles.back() = std::thread([data] {\n                SL::Screen_Capture::RunCaptureMouse(data);\n            });\n        }\n\n    }\n    else if (data->WindowCaptureData.getThingsToWatch) {\n        auto windows = data->WindowCaptureData.getThingsToWatch();\n        m_ThreadHandles.resize(windows.size() + (data->WindowCaptureData.OnMouseChanged ? 
1 : 0)); // add another thread for mouse capturing if needed\n        for (size_t i = 0; i < windows.size(); ++i) {\n            m_ThreadHandles[i] = std::thread(&SL::Screen_Capture::RunCaptureWindow, data, windows[i]);\n        }\n        if (data->WindowCaptureData.OnMouseChanged) {\n            m_ThreadHandles.back() = std::thread([data] {\n                SL::Screen_Capture::RunCaptureMouse(data);\n            });\n        }\n    }\n}\n\nvoid SL::Screen_Capture::ThreadManager::Join()\n{\n    for (auto& t : m_ThreadHandles) {\n        if (t.joinable()) {\n            if (t.get_id() == std::this_thread::get_id()) {\n                t.detach();// will run to completion\n            }\n            else {\n                t.join();\n            }\n        }\n    }\n    m_ThreadHandles.clear();\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ios/CGFrameProcessor.cpp",
    "content": "#include \"CGFrameProcessor.h\"\n#include \"TargetConditionals.h\"\n#include <ApplicationServices/ApplicationServices.h>\n#include <iostream>\n\nnamespace SL {\nnamespace Screen_Capture {\n\n    DUPL_RETURN CGFrameProcessor::Init(std::shared_ptr<Thread_Data> data, Monitor &monitor)\n    {\n        auto ret = DUPL_RETURN::DUPL_RETURN_SUCCESS;\n        Data = data;\n        SelectedMonitor = monitor;\n        return ret;\n    }\n\n    DUPL_RETURN CGFrameProcessor::Init(std::shared_ptr<Thread_Data> data, Window &window)\n    {\n        auto ret = DUPL_RETURN::DUPL_RETURN_SUCCESS;\n        Data = data;\n        return ret;\n    }\n    DUPL_RETURN CGFrameProcessor::ProcessFrame(const Monitor &curentmonitorinfo)\n    {\n        auto Ret = DUPL_RETURN_SUCCESS;\n\n        auto imageRef = CGDisplayCreateImage(Id(SelectedMonitor));\n\n        if (!imageRef)\n            return DUPL_RETURN_ERROR_EXPECTED; // this happens when the monitors change.\n\n        auto width = CGImageGetWidth(imageRef);\n        auto height = CGImageGetHeight(imageRef);\n\n        auto prov = CGImageGetDataProvider(imageRef);\n        if (!prov) {\n            CGImageRelease(imageRef);\n            return DUPL_RETURN_ERROR_EXPECTED;\n        }\n        auto bytesperrow = CGImageGetBytesPerRow(imageRef);\n        auto bitsperpixel = CGImageGetBitsPerPixel(imageRef);\n        // right now only support full 32 bit images.. 
Most desktops should run this as it's the most efficient\n        assert(bitsperpixel == PixelStride * 8);\n\n        auto rawdatas = CGDataProviderCopyData(prov);\n        auto buf = CFDataGetBytePtr(rawdatas);\n\n        ImageRect ret = {0};\n        ret.left = 0;\n        ret.top = 0;\n        ret.bottom = Height(SelectedMonitor);\n        ret.right = Width(SelectedMonitor);\n        auto rowstride = PixelStride * Width(SelectedMonitor);\n        auto startbuf = buf + ((OffsetX(SelectedMonitor) - OffsetX(curentmonitorinfo) )*PixelStride);//advance to the start of this image\n        startbuf += (OffsetY(SelectedMonitor) *  bytesperrow);\n\n        if (Data->ScreenCaptureData.OnNewFrame && !Data->ScreenCaptureData.OnFrameChanged) {\n            auto wholeimg = Create(ret, PixelStride, static_cast<int>(bytesperrow) - rowstride, startbuf);\n            Data->ScreenCaptureData.OnNewFrame(wholeimg, SelectedMonitor);\n        }\n        else {\n            auto startdst = NewImageBuffer.get();\n            if (rowstride == static_cast<int>(bytesperrow)) { // no need for multiple calls, there is no padding here\n                memcpy(startdst, buf, rowstride * Height(SelectedMonitor));\n            }\n            else {\n                for (auto i = 0; i < Height(SelectedMonitor); i++) {\n                    memcpy(startdst + (i * rowstride), (startbuf + (i * bytesperrow)) , rowstride);\n                }\n            }\n            ProcessCapture(Data->ScreenCaptureData, *this, SelectedMonitor, ret);\n        }\n\n        CFRelease(rawdatas);\n        CGImageRelease(imageRef);\n        return Ret;\n    }\n    DUPL_RETURN CGFrameProcessor::ProcessFrame(const Window &window)\n    {\n\n        auto Ret = DUPL_RETURN_SUCCESS;\n\n        auto imageRef = CGWindowListCreateImage(CGRectNull, kCGWindowListOptionIncludingWindow, static_cast<uint32_t>(window.Handle),\n                                                kCGWindowImageBoundsIgnoreFraming);\n        if (!imageRef)\n    
        return DUPL_RETURN_ERROR_EXPECTED; // this happens when the monitors change.\n\n        auto width = CGImageGetWidth(imageRef);\n        auto height = CGImageGetHeight(imageRef);\n\n        if (width != window.Size.x || height != window.Size.y) {\n            CGImageRelease(imageRef);\n            return DUPL_RETURN_ERROR_EXPECTED; // this happens when the window sizes change.\n        }\n        auto prov = CGImageGetDataProvider(imageRef);\n        if (!prov) {\n            CGImageRelease(imageRef);\n            return DUPL_RETURN_ERROR_EXPECTED;\n        }\n        auto bytesperrow = CGImageGetBytesPerRow(imageRef);\n        auto bitsperpixel = CGImageGetBitsPerPixel(imageRef);\n        // right now only support full 32 bit images.. Most desktops should run this as it's the most efficient\n        assert(bitsperpixel == PixelStride * 8);\n\n        auto rawdatas = CGDataProviderCopyData(prov);\n        auto buf = CFDataGetBytePtr(rawdatas);\n\n        auto datalen = width * height * PixelStride;\n        ImageRect ret;\n        ret.left = ret.top = 0;\n        ret.right = width;\n        ret.bottom = height;\n        if (Data->WindowCaptureData.OnNewFrame && !Data->WindowCaptureData.OnFrameChanged) {\n\n            auto wholeimg = SL::Screen_Capture::Create(ret, PixelStride, bytesperrow - PixelStride * width, buf);\n            Data->WindowCaptureData.OnNewFrame(wholeimg, window);\n        }\n        else {\n            if (bytesperrow == PixelStride * width) {\n                // most efficient, can be done in a single memcpy\n                memcpy(NewImageBuffer.get(), buf, datalen);\n            }\n            else {\n                // for loop needed to copy each row\n                auto dst = NewImageBuffer.get();\n                auto src = buf;\n                for (auto h = 0; h < height; h++) {\n                    memcpy(dst, src, PixelStride * width);\n                    dst += PixelStride * width;\n                    src += bytesperrow;\n   
             }\n            }\n\n            ProcessCapture(Data->WindowCaptureData, *this, window, ret);\n        }\n        CFRelease(rawdatas);\n        CGImageRelease(imageRef);\n        return Ret;\n    }\n}\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ios/GetMonitors.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <ApplicationServices/ApplicationServices.h>\n\n\nnamespace SL{\n    namespace Screen_Capture{\n        \n        std::vector<Monitor> GetMonitors() {\n            std::vector<Monitor> ret;\n            std::vector<CGDirectDisplayID> displays;\n            CGDisplayCount count=0;\n            //get count\n            CGGetActiveDisplayList(0, 0, &count);\n            displays.resize(count);\n    \n            CGGetActiveDisplayList(count, displays.data(), &count);\n            for(auto  i = 0; i < count; i++) {\n                //only include non-mirrored displays\n                if(CGDisplayMirrorsDisplay(displays[i]) == kCGNullDirectDisplay){\n                    \n                    auto dismode =CGDisplayCopyDisplayMode(displays[i]);\n                    \n                    auto width = CGDisplayModeGetPixelWidth(dismode);\n                    auto height = CGDisplayModeGetPixelHeight(dismode);\n                    CGDisplayModeRelease(dismode);\n                    auto r = CGDisplayBounds(displays[i]);\n                    auto scale = static_cast<float>(width)/static_cast<float>(r.size.width);\n                    auto name = std::string(\"Monitor \") + std::to_string(displays[i]);\n                    ret.push_back(CreateMonitor(i, displays[i],height,width, int(r.origin.x), int(r.origin.y), name, scale));\n                }\n            }\n            return ret;\n\n        }\n    }\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ios/GetWindows.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <algorithm>\n#include <string>\n#include \"TargetConditionals.h\"\n#include <ApplicationServices/ApplicationServices.h>\n#include <iostream>\n\nnamespace SL\n{\nnamespace Screen_Capture\n{\n\n    std::vector<Window> GetWindows()\n    {\n        CGDisplayCount count=0;\n        CGGetActiveDisplayList(0, 0, &count);\n        std::vector<CGDirectDisplayID> displays;\n        displays.resize(count);\n        CGGetActiveDisplayList(count, displays.data(), &count);\n        auto xscale=1.0f;\n        auto yscale = 1.0f;\n        \n        for(auto  i = 0; i < count; i++) {\n            //only include non-mirrored displays\n            if(CGDisplayMirrorsDisplay(displays[i]) == kCGNullDirectDisplay){\n                \n                auto dismode =CGDisplayCopyDisplayMode(displays[i]);\n                auto scaledsize = CGDisplayBounds(displays[i]);\n            \n                auto pixelwidth = CGDisplayModeGetPixelWidth(dismode);\n                auto pixelheight = CGDisplayModeGetPixelHeight(dismode);\n                \n                CGDisplayModeRelease(dismode);\n                \n                if(scaledsize.size.width !=pixelwidth){//scaling going on!\n                    xscale = static_cast<float>(pixelwidth)/static_cast<float>(scaledsize.size.width);\n                }\n                if(scaledsize.size.height !=pixelheight){//scaling going on!\n                    yscale = static_cast<float>(pixelheight)/static_cast<float>(scaledsize.size.height);\n                }\n                break;\n            }\n        }\n        \n        auto windowList = CGWindowListCopyWindowInfo(kCGWindowListOptionOnScreenOnly, kCGNullWindowID);\n        std::vector<Window> ret;\n        CFIndex numWindows = CFArrayGetCount(windowList );\n   \n        for( int i = 0; i < (int)numWindows; i++ ) {\n            char buffer[400] ={0};\n            uint32_t windowid=0;\n            auto dict = 
static_cast<CFDictionaryRef>(CFArrayGetValueAtIndex(windowList, i));\n            auto cfwindowname = static_cast<CFStringRef>(CFDictionaryGetValue(dict, kCGWindowName));\n            // kCGWindowName may be absent for some windows; guard against a null CFStringRef\n            if (cfwindowname) {\n                CFStringGetCString(cfwindowname, buffer, 400, kCFStringEncodingUTF8);\n            }\n            std::string windowname=buffer;\n\n            Window w;\n            CFNumberGetValue(static_cast<CFNumberRef>(CFDictionaryGetValue(dict, kCGWindowNumber)), kCFNumberIntType, &windowid);\n            w.Handle = static_cast<size_t>(windowid);\n               \n            auto dims =static_cast<CFDictionaryRef>(CFDictionaryGetValue(dict,kCGWindowBounds));\n            CGRect rect;\n            CGRectMakeWithDictionaryRepresentation(dims, &rect);\n            w.Position.x = static_cast<int>(rect.origin.x);\n            w.Position.y = static_cast<int>(rect.origin.y);\n                \n            w.Size.x = static_cast<int>(rect.size.width * xscale);\n            w.Size.y = static_cast<int>(rect.size.height* yscale);\n            memcpy(w.Name, windowname.c_str(), windowname.size() + 1);\n            ret.push_back(w);\n        }\n        CFRelease(windowList);\n        return ret;\n    }\n}\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ios/NSMouseCapture.m",
    "content": "//\n//  NSMouseCapture.m\n//  Screen_Capture\n//\n//  Created by scott lee on 1/11/17.\n//\n//\n\n#import <Foundation/Foundation.h>\n#import <AppKit/AppKit.h>\n#import \"NSMouseCapture.h\"\n\nvoid SLScreen_Capture_InitMouseCapture(){\n    [NSApplication sharedApplication];\n}\nCGImageRef SLScreen_Capture_GetCurrentMouseImage(){\n    CGImageRef img=NULL;\n \n    @autoreleasepool {\n        NSCursor *cur = [NSCursor currentSystemCursor];\n        if(cur==nil) return img;\n        NSImage *overlay    =  [cur image];\n        CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[overlay TIFFRepresentation], NULL);\n        img = CGImageSourceCreateImageAtIndex(source, 0, NULL);\n        CFRelease(source);\n    }\n \n    return img;\n} "
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ios/NSMouseProcessor.cpp",
    "content": "#include \"NSMouseProcessor.h\"\n#include \"NSMouseCapture.h\"\n#include <iostream>\n\nnamespace SL {\n    namespace Screen_Capture {\n        \n  \n        DUPL_RETURN NSMouseProcessor::Init(std::shared_ptr<Thread_Data> data) {\n            auto ret = DUPL_RETURN::DUPL_RETURN_SUCCESS;\n            Data = data;\n            SLScreen_Capture_InitMouseCapture();\n            return ret;\n        }\n        //\n        // Process a given frame and its metadata\n        //\n        \n        DUPL_RETURN NSMouseProcessor::ProcessFrame()\n        {\n            auto Ret = DUPL_RETURN_SUCCESS;\n        \n            if(Data->ScreenCaptureData.OnMouseChanged || Data->WindowCaptureData.OnMouseChanged ){\n                auto mouseev = CGEventCreate(NULL);\n                auto loc = CGEventGetLocation(mouseev);\n                CFRelease(mouseev);\n                \n                auto imageRef = SLScreen_Capture_GetCurrentMouseImage();\n                if(imageRef==NULL) return Ret;\n                auto width = CGImageGetWidth(imageRef);\n                auto height = CGImageGetHeight(imageRef);\n                \n                auto prov = CGImageGetDataProvider(imageRef);\n\n                auto rawdatas= CGDataProviderCopyData(prov);\n                auto buf = CFDataGetBytePtr(rawdatas);\n                auto datalen = CFDataGetLength(rawdatas);\n                if(datalen > ImageBufferSize){\n                    NewImageBuffer = std::make_unique<unsigned char[]>(datalen);\n                    OldImageBuffer= std::make_unique<unsigned char[]>(datalen);\n                }\n                \n                memcpy(NewImageBuffer.get(),buf, datalen);\n                CFRelease(rawdatas);\n               \n               //this is not needed. 
It is freed when the image is released\n                //CGDataProviderRelease(prov);\n               \n                CGImageRelease(imageRef);\n                \n                ImageRect imgrect;\n                imgrect.left =  imgrect.top=0;\n                imgrect.right =width;\n                imgrect.bottom = height;\n                auto wholeimgfirst = Create(imgrect, PixelStride, 0, NewImageBuffer.get());\n                \n              \n                auto lastx = static_cast<int>(loc.x);\n                auto lasty = static_cast<int>(loc.y);\n                    //if the mouse image is different, send the new image and swap the data\n         \n                    if (memcmp(NewImageBuffer.get(), OldImageBuffer.get(), datalen) != 0) {\n                        if(Data->ScreenCaptureData.OnMouseChanged){\n                           Data->ScreenCaptureData.OnMouseChanged(&wholeimgfirst, Point{ lastx, lasty });\n                        }\n                        if(Data->WindowCaptureData.OnMouseChanged){\n                            Data->WindowCaptureData.OnMouseChanged(&wholeimgfirst, Point{ lastx, lasty });\n                        }\n                        std::swap(NewImageBuffer, OldImageBuffer);\n                        \n                    }\n                    else if(Last_x != lastx || Last_y != lasty){\n                        if(Data->ScreenCaptureData.OnMouseChanged){\n                            Data->ScreenCaptureData.OnMouseChanged(nullptr, Point{ lastx, lasty });\n                        }\n                        if(Data->WindowCaptureData.OnMouseChanged){\n                            Data->WindowCaptureData.OnMouseChanged(nullptr, Point{ lastx, lasty });\n                        }\n                    }\n                Last_x = lastx;\n                Last_y = lasty;\n          \n            }\n            return Ret;\n        }\n\n    }\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/ios/ThreadRunner.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"ThreadManager.h\"\n#include \"CGFrameProcessor.h\"\n#include \"NSMouseProcessor.h\"\n\nnamespace SL{\n    namespace Screen_Capture{\n        void RunCaptureMouse(std::shared_ptr<Thread_Data> data) {\n            TryCaptureMouse<NSMouseProcessor>(data);\n        }\n        void RunCaptureMonitor(std::shared_ptr<Thread_Data> data, Monitor monitor){\n            TryCaptureMonitor<CGFrameProcessor>(data, monitor);\n        }\n        void RunCaptureWindow(std::shared_ptr<Thread_Data> data, Window window){\n            TryCaptureWindow<CGFrameProcessor>(data, window);\n        }\n    }\n}\n\n  \n    \n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/linux/GetMonitors.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <X11/Xlib.h>\n#include <X11/extensions/Xinerama.h>\n\nnamespace SL\n{\nnamespace Screen_Capture\n{\n\n    std::vector<Monitor> GetMonitors()\n    {\n        std::vector<Monitor> ret;\n\n        Display* display = XOpenDisplay(NULL);\n        int nmonitors = 0;\n        XineramaScreenInfo* screen = XineramaQueryScreens(display, &nmonitors);\n        ret.reserve(nmonitors);\n       \n        for(auto i = 0; i < nmonitors; i++) {\n\n            auto name = std::string(\"Display \") + std::to_string(i);\n            ret.push_back(CreateMonitor(\n                i, screen[i].screen_number, screen[i].height, screen[i].width, screen[i].x_org, screen[i].y_org, name, 1.0f));\n        }\n        XFree(screen);\n\n        XCloseDisplay(display);\n        return ret;\n    }\n}\n}\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/linux/GetWindows.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"SCCommon.h\"\n#include <X11/Xlib.h>\n#include <algorithm>\n#include <string>\n\nnamespace SL\n{\nnamespace Screen_Capture\n{\n\n    void AddWindow(Display* display, XID& window, std::vector<Window>& wnd)\n    {\n        std::string name;\n        char* n = NULL;\n        if(XFetchName(display, window, &n) > 0) {\n            name = n;\n            XFree(n);\n        } \n        Window w;\n        w.Handle = reinterpret_cast<size_t>(window);\n        XWindowAttributes wndattr;\n        XGetWindowAttributes(display, window, &wndattr);\n        w.Position = Point{ wndattr.x, wndattr.y };\n        w.Size = Point{ wndattr.width, wndattr.height };\n\n        memcpy(w.Name, name.c_str(), name.size() + 1);\n        wnd.push_back(w);\n    }\n\n    std::vector<Window> GetWindows()\n    {\n        auto* display = XOpenDisplay(NULL);\n        Atom a = XInternAtom(display, \"_NET_CLIENT_LIST\", true);\n        Atom actualType;\n        int format;\n        unsigned long numItems, bytesAfter;\n        unsigned char* data = 0;\n        int status = XGetWindowProperty(display,\n                                        XDefaultRootWindow(display),\n                                        a,\n                                        0L,\n                                        (~0L),\n                                        false,\n                                        AnyPropertyType,\n                                        &actualType,\n                                        &format,\n                                        &numItems,\n                                        &bytesAfter,\n                                        &data);\n        std::vector<Window> ret;\n        if(status >= Success && numItems) {\n            auto array = (XID*)data;\n            for(decltype(numItems) k = 0; k < numItems; k++) {\n                auto w = array[k];\n                AddWindow(display, w, ret);\n            }\n            
XFree(data);\n        }\n        XCloseDisplay(display);\n        return ret;\n    }\n}\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/linux/ThreadRunner.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"ThreadManager.h\"\n#include \"X11FrameProcessor.h\"\n#include \"X11MouseProcessor.h\"\n\nnamespace SL{\n    namespace Screen_Capture{\t\n        void RunCaptureMouse(std::shared_ptr<Thread_Data> data) {\n            TryCaptureMouse<X11MouseProcessor>(data);\n        }\n        void RunCaptureMonitor(std::shared_ptr<Thread_Data> data, Monitor monitor){\n            TryCaptureMonitor<X11FrameProcessor>(data, monitor);\n        }\n        void RunCaptureWindow(std::shared_ptr<Thread_Data> data, Window window){\n            TryCaptureWindow<X11FrameProcessor>(data, window);\n        }\n    }\n}\n\n  \n    \n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/linux/X11FrameProcessor.cpp",
    "content": "#include \"X11FrameProcessor.h\"\n#include <X11/Xutil.h>\n#include <X11/extensions/Xinerama.h>\n#include <assert.h>\n#include <vector>\n\nnamespace SL\n{\nnamespace Screen_Capture\n{\n    X11FrameProcessor::X11FrameProcessor()\n    {\n    }\n\n    X11FrameProcessor::~X11FrameProcessor()\n    {\n\n        if(ShmInfo) {\n            shmdt(ShmInfo->shmaddr);\n            shmctl(ShmInfo->shmid, IPC_RMID, 0);\n            XShmDetach(SelectedDisplay, ShmInfo.get());\n        }\n        if(Image) {\n            XDestroyImage(Image);\n        }\n        if(SelectedDisplay) {\n            XCloseDisplay(SelectedDisplay);\n        }\n    }\n    \n    DUPL_RETURN X11FrameProcessor::Init(std::shared_ptr<Thread_Data> data, const Window& selectedwindow){\n        \n        auto ret = DUPL_RETURN::DUPL_RETURN_SUCCESS;\n        Data = data; \n        SelectedDisplay = XOpenDisplay(NULL);\n        SelectedWindow = selectedwindow.Handle;\n        if(!SelectedDisplay) {\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n        }\n        int scr = XDefaultScreen(SelectedDisplay);\n\n        ShmInfo = std::make_unique<XShmSegmentInfo>();\n\n        Image = XShmCreateImage(SelectedDisplay,\n                                DefaultVisual(SelectedDisplay, scr),\n                                DefaultDepth(SelectedDisplay, scr),\n                                ZPixmap,\n                                NULL,\n                                ShmInfo.get(),\n                                selectedwindow.Size.x,\n                                selectedwindow.Size.y);\n        ShmInfo->shmid = shmget(IPC_PRIVATE, Image->bytes_per_line * Image->height, IPC_CREAT | 0777);\n\n        ShmInfo->readOnly = False;\n        ShmInfo->shmaddr = Image->data = (char*)shmat(ShmInfo->shmid, 0, 0);\n\n        XShmAttach(SelectedDisplay, ShmInfo.get());\n\n        return ret;\n    }\n    DUPL_RETURN X11FrameProcessor::Init(std::shared_ptr<Thread_Data> data, Monitor& monitor)\n    
{\n        auto ret = DUPL_RETURN::DUPL_RETURN_SUCCESS;\n        Data = data;\n        SelectedMonitor = monitor;\n        SelectedDisplay = XOpenDisplay(NULL);\n        if(!SelectedDisplay) {\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n        }\n        int scr = XDefaultScreen(SelectedDisplay);\n\n        ShmInfo = std::make_unique<XShmSegmentInfo>();\n\n        Image = XShmCreateImage(SelectedDisplay,\n                                DefaultVisual(SelectedDisplay, scr),\n                                DefaultDepth(SelectedDisplay, scr),\n                                ZPixmap,\n                                NULL,\n                                ShmInfo.get(),\n                                Width(SelectedMonitor),\n                                Height(SelectedMonitor));\n        ShmInfo->shmid = shmget(IPC_PRIVATE, Image->bytes_per_line * Image->height, IPC_CREAT | 0777);\n\n        ShmInfo->readOnly = False;\n        ShmInfo->shmaddr = Image->data = (char*)shmat(ShmInfo->shmid, 0, 0);\n\n        XShmAttach(SelectedDisplay, ShmInfo.get());\n\n        return ret;\n    }\n \n    DUPL_RETURN X11FrameProcessor::ProcessFrame(const Monitor& curentmonitorinfo)\n    {        \n        auto Ret = DUPL_RETURN_SUCCESS;\n        ImageRect ret;\n        ret.left = ret.top = 0;\n        ret.right = Width(SelectedMonitor);\n        ret.bottom = Height(SelectedMonitor);\n        if(!XShmGetImage(SelectedDisplay,\n                         RootWindow(SelectedDisplay, DefaultScreen(SelectedDisplay)),\n                         Image,\n                         OffsetX(SelectedMonitor),\n                         OffsetY(SelectedMonitor),\n                         AllPlanes)) {\n            return DUPL_RETURN_ERROR_EXPECTED;\n        }\n\n        if(Data->ScreenCaptureData.OnNewFrame && !Data->ScreenCaptureData.OnFrameChanged) {\n\n            auto wholeimg = Create(ret, PixelStride, 0, reinterpret_cast<unsigned char*>(Image->data));\n            
Data->ScreenCaptureData.OnNewFrame(wholeimg, SelectedMonitor);\n        } else {\n            memcpy(NewImageBuffer.get(), Image->data, PixelStride * ret.right * ret.bottom);\n            ProcessCapture(Data->ScreenCaptureData, *this, SelectedMonitor, ret);\n        }\n        return Ret;\n    }\n    DUPL_RETURN X11FrameProcessor::ProcessFrame(Window& selectedwindow){\n        \n        auto Ret = DUPL_RETURN_SUCCESS;\n        ImageRect ret;\n        ret.left = ret.top = 0;\n        ret.right = selectedwindow.Size.x;\n        ret.bottom = selectedwindow.Size.y;\n        XWindowAttributes wndattr;\n        if(XGetWindowAttributes(SelectedDisplay, SelectedWindow, &wndattr) ==0){\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;//window might not be valid any more\n        }\n        if(wndattr.width != ret.right || wndattr.height != ret.bottom){\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;//window size changed. This will rebuild everything\n        }\n        if(!XShmGetImage(SelectedDisplay,\n                         selectedwindow.Handle,\n                         Image,\n                         0,\n                         0,\n                         AllPlanes)) {\n            return DUPL_RETURN_ERROR_EXPECTED;\n        }\n\n        if(Data->WindowCaptureData.OnNewFrame && !Data->WindowCaptureData.OnFrameChanged) {\n\n            auto wholeimg = Create(ret, PixelStride, 0, reinterpret_cast<unsigned char*>(Image->data));\n            Data->WindowCaptureData.OnNewFrame(wholeimg, selectedwindow);\n        } else {\n            memcpy(NewImageBuffer.get(), Image->data, PixelStride * ret.right * ret.bottom);\n            ProcessCapture(Data->WindowCaptureData, *this, selectedwindow, ret);\n        }\n        return Ret;\n    }\n}\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/linux/X11MouseProcessor.cpp",
    "content": "#include \"X11MouseProcessor.h\"\n\n#include <assert.h>\n#include <cstring>\n\nnamespace SL {\n    namespace Screen_Capture {\n        \n        X11MouseProcessor::X11MouseProcessor()\n        {\n            \n          \n        }\n\n        X11MouseProcessor::~X11MouseProcessor()\n        {\n            if (SelectedDisplay) {\n                XCloseDisplay(SelectedDisplay);\n            }\n        }\n        DUPL_RETURN X11MouseProcessor::Init(std::shared_ptr<Thread_Data> data) {\n            auto ret = DUPL_RETURN::DUPL_RETURN_SUCCESS;\n            Data = data;\n            SelectedDisplay = XOpenDisplay(NULL);\n            if (!SelectedDisplay) {\n                return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n            }\n            RootWindow = DefaultRootWindow(SelectedDisplay);\n            if (!RootWindow) {\n                return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n            }\n            return ret;\n        }\n        //\n        // Process a given frame and its metadata\n        //\n        DUPL_RETURN X11MouseProcessor::ProcessFrame()\n        {\n            auto Ret = DUPL_RETURN_SUCCESS;\n    \n            auto img = XFixesGetCursorImage(SelectedDisplay);\n       \n            if (sizeof(img->pixels[0]) == 8) {//if the pixelstride is 64 bits.. 
scale down to 32bits\n                auto pixels = (int *)img->pixels;\n                for (auto i = 0; i < img->width * img->height; ++i) {\n                    pixels[i] = pixels[i * 2];\n                }\n            }\n            ImageRect imgrect;\n            imgrect.left = imgrect.top = 0;\n            imgrect.right = img->width;\n            imgrect.bottom = img->height;\n            auto newsize = PixelStride*imgrect.right*imgrect.bottom;\n            if(newsize>ImageBufferSize){\n                NewImageBuffer = std::make_unique<unsigned char[]>(PixelStride*imgrect.right*imgrect.bottom);\n                OldImageBuffer=std::make_unique<unsigned char[]>(PixelStride*imgrect.right*imgrect.bottom);\n            }\n            \n            memcpy(NewImageBuffer.get(), img->pixels, newsize);\n\n                // Get the mouse cursor position\n            int x, y, root_x, root_y = 0;\n            unsigned int mask = 0;\n            XID child_win, root_win;\n            XQueryPointer(SelectedDisplay, RootWindow, &child_win, &root_win, &root_x, &root_y, &x, &y, &mask);\n            x -= img->xhot;\n            y -= img->yhot;\n                \n            XFree(img);\n\n\n            if (Data->ScreenCaptureData.OnMouseChanged) {\n\n                auto wholeimg = Create(imgrect, PixelStride, 0, OldImageBuffer.get());\n                    \n                //if the mouse image is different, send the new image and swap the data \n                if (memcmp(NewImageBuffer.get(), OldImageBuffer.get(), newsize) != 0) {\n                    Data->ScreenCaptureData.OnMouseChanged(&wholeimg, Point{x, y});\n                    std::swap(NewImageBuffer, OldImageBuffer);\n                }\n                else if(Last_x != x || Last_y != y){\n                    Data->ScreenCaptureData.OnMouseChanged(nullptr, Point{x, y});\n                }\n                Last_x = x;\n                Last_y = y;\n            }\n            return Ret;\n        }\n\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/DXFrameProcessor.cpp",
    "content": "#include \"DXFrameProcessor.h\"\n#include <atomic>\n#include <iostream>\n#include <memory>\n#include <mutex>\n#include <string>\n\n#if (_MSC_VER >= 1700) && defined(_USING_V110_SDK71_)\nnamespace SL {\nnamespace Screen_Capture {\n\n    DUPL_RETURN DXFrameProcessor::Init(std::shared_ptr<Thread_Data> data) { return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED; }\n    DUPL_RETURN DXFrameProcessor::ProcessFrame() { return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED; }\n\n} // namespace Screen_Capture\n} // namespace SL\n#else\n\nnamespace SL {\nnamespace Screen_Capture {\n    struct DX_RESOURCES {\n        Microsoft::WRL::ComPtr<ID3D11Device> Device;\n        Microsoft::WRL::ComPtr<ID3D11DeviceContext> DeviceContext;\n    };\n    struct DUPLE_RESOURCES {\n        Microsoft::WRL::ComPtr<IDXGIOutputDuplication> OutputDuplication;\n        DXGI_OUTPUT_DESC OutputDesc;\n        UINT Output;\n    };\n\n    // These are the errors we expect from general Dxgi API due to a transition\n    HRESULT SystemTransitionsExpectedErrors[] = {\n        DXGI_ERROR_DEVICE_REMOVED, DXGI_ERROR_ACCESS_LOST, static_cast<HRESULT>(WAIT_ABANDONED),\n        S_OK // Terminate list with zero valued HRESULT\n    };\n\n    // These are the errors we expect from IDXGIOutput1::DuplicateOutput due to a transition\n    HRESULT CreateDuplicationExpectedErrors[] = {\n        DXGI_ERROR_DEVICE_REMOVED, static_cast<HRESULT>(E_ACCESSDENIED), DXGI_ERROR_UNSUPPORTED, DXGI_ERROR_SESSION_DISCONNECTED,\n        S_OK // Terminate list with zero valued HRESULT\n    };\n\n    // These are the errors we expect from IDXGIOutputDuplication methods due to a transition\n    HRESULT FrameInfoExpectedErrors[] = {\n        DXGI_ERROR_DEVICE_REMOVED, DXGI_ERROR_ACCESS_LOST, DXGI_ERROR_INVALID_CALL,\n        S_OK // Terminate list with zero valued HRESULT\n    };\n\n    // These are the errors we expect from IDXGIAdapter::EnumOutputs methods due to outputs becoming stale during a transition\n    HRESULT 
EnumOutputsExpectedErrors[] = {\n        DXGI_ERROR_NOT_FOUND,\n        S_OK // Terminate list with zero valued HRESULT\n    };\n\n    DUPL_RETURN ProcessFailure(ID3D11Device *Device, LPCWSTR Str, LPCWSTR Title, HRESULT hr, HRESULT *ExpectedErrors = nullptr)\n    {\n        HRESULT TranslatedHr;\n#if defined _DEBUG || !defined NDEBUG\n        std::wcout << Str << \"\\t\" << Title << std::endl;\n#endif\n        // On an error check if the DX device is lost\n        if (Device) {\n            HRESULT DeviceRemovedReason = Device->GetDeviceRemovedReason();\n\n            switch (DeviceRemovedReason) {\n            case DXGI_ERROR_DEVICE_REMOVED:\n            case DXGI_ERROR_DEVICE_RESET:\n            case static_cast<HRESULT>(E_OUTOFMEMORY): {\n                // Our device has been stopped due to an external event on the GPU so map them all to\n                // device removed and continue processing the condition\n                TranslatedHr = DXGI_ERROR_DEVICE_REMOVED;\n                break;\n            }\n\n            case S_OK: {\n                // Device is not removed so use original error\n                TranslatedHr = hr;\n                break;\n            }\n\n            default: {\n                // Device is removed but not a error we want to remap\n                TranslatedHr = DeviceRemovedReason;\n            }\n            }\n        }\n        else {\n            TranslatedHr = hr;\n        }\n\n        // Check if this error was expected or not\n        if (ExpectedErrors) {\n            HRESULT *CurrentResult = ExpectedErrors;\n\n            while (*CurrentResult != S_OK) {\n                if (*(CurrentResult++) == TranslatedHr) {\n                    return DUPL_RETURN_ERROR_EXPECTED;\n                }\n            }\n        }\n\n        return DUPL_RETURN_ERROR_UNEXPECTED;\n    }\n\n    DUPL_RETURN Initialize(DX_RESOURCES &data)\n    {\n\n        HRESULT hr = S_OK;\n\n        // Driver types supported\n        D3D_DRIVER_TYPE 
DriverTypes[] = {\n            D3D_DRIVER_TYPE_HARDWARE,\n            D3D_DRIVER_TYPE_WARP,\n            D3D_DRIVER_TYPE_REFERENCE,\n        };\n        UINT NumDriverTypes = ARRAYSIZE(DriverTypes);\n\n        // Feature levels supported\n        D3D_FEATURE_LEVEL FeatureLevels[] = {D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_1};\n        UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);\n\n        D3D_FEATURE_LEVEL FeatureLevel;\n\n        // Create device\n        for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex) {\n            hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr, 0, FeatureLevels, NumFeatureLevels, D3D11_SDK_VERSION,\n                                   data.Device.GetAddressOf(), &FeatureLevel, data.DeviceContext.GetAddressOf());\n            if (SUCCEEDED(hr)) {\n                // Device creation success, no need to loop anymore\n                break;\n            }\n        }\n        if (FAILED(hr)) {\n            return ProcessFailure(nullptr, L\"Failed to create device in InitializeDx\", L\"Error\", hr);\n        }\n\n        return DUPL_RETURN_SUCCESS;\n    }\n\n    DUPL_RETURN Initialize(DUPLE_RESOURCES &r, ID3D11Device *device, const UINT output)\n    {\n\n        // Get DXGI device\n        Microsoft::WRL::ComPtr<IDXGIDevice> DxgiDevice;\n        HRESULT hr = device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void **>(DxgiDevice.GetAddressOf()));\n        if (FAILED(hr)) {\n            return ProcessFailure(nullptr, L\"Failed to QI for DXGI Device\", L\"Error\", hr);\n        }\n\n        // Get DXGI adapter\n        Microsoft::WRL::ComPtr<IDXGIAdapter> DxgiAdapter;\n        hr = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void **>(DxgiAdapter.GetAddressOf()));\n        if (FAILED(hr)) {\n            return ProcessFailure(device, L\"Failed to get parent DXGI Adapter\", L\"Error\", hr, 
SystemTransitionsExpectedErrors);\n        }\n\n        // Get output\n        Microsoft::WRL::ComPtr<IDXGIOutput> DxgiOutput;\n        hr = DxgiAdapter->EnumOutputs(output, DxgiOutput.GetAddressOf());\n\n        if (FAILED(hr)) {\n            return ProcessFailure(device, L\"Failed to get specified output in DUPLICATIONMANAGER\", L\"Error\", hr, EnumOutputsExpectedErrors);\n        }\n\n        DxgiOutput->GetDesc(&r.OutputDesc);\n\n        // QI for Output 1\n        Microsoft::WRL::ComPtr<IDXGIOutput1> DxgiOutput1;\n        hr = DxgiOutput.Get()->QueryInterface(__uuidof(IDXGIOutput1), reinterpret_cast<void **>(DxgiOutput1.GetAddressOf()));\n        if (FAILED(hr)) {\n            return ProcessFailure(nullptr, L\"Failed to QI for DxgiOutput1 in DUPLICATIONMANAGER\", L\"Error\", hr);\n        }\n\n        // Create desktop duplication\n        hr = DxgiOutput1->DuplicateOutput(device, r.OutputDuplication.GetAddressOf());\n        if (FAILED(hr)) {\n            return ProcessFailure(device, L\"Failed to get duplicate output in DUPLICATIONMANAGER\", L\"Error\", hr, CreateDuplicationExpectedErrors);\n        }\n        r.Output = output;\n        return DUPL_RETURN_SUCCESS;\n    }\n    RECT ConvertRect(RECT Dirty, const DXGI_OUTPUT_DESC &DeskDesc)\n    {\n        RECT DestDirty = Dirty;\n        INT Width = DeskDesc.DesktopCoordinates.right - DeskDesc.DesktopCoordinates.left;\n        INT Height = DeskDesc.DesktopCoordinates.bottom - DeskDesc.DesktopCoordinates.top;\n\n        // Set appropriate coordinates compensated for rotation\n        switch (DeskDesc.Rotation) {\n        case DXGI_MODE_ROTATION_ROTATE90: {\n\n            DestDirty.left = Width - Dirty.bottom;\n            DestDirty.top = Dirty.left;\n            DestDirty.right = Width - Dirty.top;\n            DestDirty.bottom = Dirty.right;\n\n            break;\n        }\n        case DXGI_MODE_ROTATION_ROTATE180: {\n            DestDirty.left = Width - Dirty.right;\n            DestDirty.top = Height - 
Dirty.bottom;\n            DestDirty.right = Width - Dirty.left;\n            DestDirty.bottom = Height - Dirty.top;\n\n            break;\n        }\n        case DXGI_MODE_ROTATION_ROTATE270: {\n            DestDirty.left = Dirty.top;\n            DestDirty.top = Height - Dirty.right;\n            DestDirty.right = Dirty.bottom;\n            DestDirty.bottom = Height - Dirty.left;\n\n            break;\n        }\n        case DXGI_MODE_ROTATION_UNSPECIFIED:\n        case DXGI_MODE_ROTATION_IDENTITY: {\n            break;\n        }\n        default:\n            break;\n        }\n        return DestDirty;\n    }\n\n    class AquireFrameRAII {\n\n        IDXGIOutputDuplication *_DuplLock;\n        bool AquiredLock;\n        void TryRelease()\n        {\n            if (AquiredLock) {\n                auto hr = _DuplLock->ReleaseFrame();\n                if (FAILED(hr) && hr != DXGI_ERROR_WAIT_TIMEOUT) {\n                    ProcessFailure(nullptr, L\"Failed to release frame in DUPLICATIONMANAGER\", L\"Error\", hr, FrameInfoExpectedErrors);\n                }\n            }\n            AquiredLock = false;\n        }\n\n      public:\n        AquireFrameRAII(IDXGIOutputDuplication *dupl) : _DuplLock(dupl), AquiredLock(false) {}\n\n        ~AquireFrameRAII() { TryRelease(); }\n        HRESULT AcquireNextFrame(UINT TimeoutInMilliseconds, DXGI_OUTDUPL_FRAME_INFO *pFrameInfo, IDXGIResource **ppDesktopResource)\n        {\n            auto hr = _DuplLock->AcquireNextFrame(TimeoutInMilliseconds, pFrameInfo, ppDesktopResource);\n            TryRelease();\n            AquiredLock = SUCCEEDED(hr);\n            return hr;\n        }\n    };\n    class MAPPED_SUBRESOURCERAII {\n        ID3D11DeviceContext *_Context;\n        ID3D11Resource *_Resource;\n        UINT _Subresource;\n\n      public:\n        MAPPED_SUBRESOURCERAII(ID3D11DeviceContext *context) : _Context(context), _Resource(nullptr), _Subresource(0) {}\n\n        ~MAPPED_SUBRESOURCERAII() { 
if (_Resource != nullptr) { _Context->Unmap(_Resource, _Subresource); } }\n        HRESULT Map(ID3D11Resource *pResource, UINT Subresource, D3D11_MAP MapType, UINT MapFlags, D3D11_MAPPED_SUBRESOURCE *pMappedResource)\n        {\n            if (_Resource != nullptr) {\n                _Context->Unmap(_Resource, _Subresource);\n            }\n            _Resource = pResource;\n            _Subresource = Subresource;\n            return _Context->Map(_Resource, _Subresource, MapType, MapFlags, pMappedResource);\n        }\n    };\n\n    DUPL_RETURN DXFrameProcessor::Init(std::shared_ptr<Thread_Data> data, Monitor &monitor)\n    {\n        SelectedMonitor = monitor;\n        DX_RESOURCES res;\n        auto ret = Initialize(res);\n        if (ret != DUPL_RETURN_SUCCESS) {\n            return ret;\n        }\n        DUPLE_RESOURCES dupl;\n        ret = Initialize(dupl, res.Device.Get(), Id(SelectedMonitor));\n        if (ret != DUPL_RETURN_SUCCESS) {\n            return ret;\n        }\n        Device = res.Device;\n        DeviceContext = res.DeviceContext;\n        OutputDuplication = dupl.OutputDuplication;\n        OutputDesc = dupl.OutputDesc;\n        Output = dupl.Output;\n\n        Data = data;\n\n        return ret;\n    }\n\n    //\n    // Process a given frame and its metadata\n    //\n\n    DUPL_RETURN DXFrameProcessor::ProcessFrame(const Monitor &currentmonitorinfo)\n    {\n        auto Ret = DUPL_RETURN_SUCCESS;\n\n        Microsoft::WRL::ComPtr<IDXGIResource> DesktopResource;\n        DXGI_OUTDUPL_FRAME_INFO FrameInfo = {0};\n        AquireFrameRAII frame(OutputDuplication.Get());\n\n        // Get new frame\n        auto hr = frame.AcquireNextFrame(100, &FrameInfo, DesktopResource.GetAddressOf());\n        if (hr == DXGI_ERROR_WAIT_TIMEOUT) {\n            return DUPL_RETURN_SUCCESS;\n        }\n        else if (FAILED(hr)) {\n            return ProcessFailure(Device.Get(), L\"Failed to acquire next frame in DUPLICATIONMANAGER\", L\"Error\", hr, FrameInfoExpectedErrors);\n        
}\n        Microsoft::WRL::ComPtr<ID3D11Texture2D> aquireddesktopimage;\n        // QI for ID3D11Texture2D\n        hr = DesktopResource.Get()->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void **>(aquireddesktopimage.GetAddressOf()));\n        if (FAILED(hr)) {\n            return ProcessFailure(nullptr, L\"Failed to QI for ID3D11Texture2D from acquired IDXGIResource in DUPLICATIONMANAGER\", L\"Error\", hr);\n        }\n\n        if (!StagingSurf) {\n            D3D11_TEXTURE2D_DESC ThisDesc = {0};\n            aquireddesktopimage->GetDesc(&ThisDesc);\n            D3D11_TEXTURE2D_DESC StagingDesc;\n            StagingDesc = ThisDesc;\n            StagingDesc.BindFlags = 0;\n            StagingDesc.Usage = D3D11_USAGE_STAGING;\n            StagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;\n            StagingDesc.MiscFlags = 0;\n            StagingDesc.Height = Height(SelectedMonitor);\n            StagingDesc.Width = Width(SelectedMonitor);\n\n            hr = Device->CreateTexture2D(&StagingDesc, nullptr, StagingSurf.GetAddressOf());\n            if (FAILED(hr)) {\n                return ProcessFailure(Device.Get(), L\"Failed to create staging texture for move rects\", L\"Error\", hr,\n                                      SystemTransitionsExpectedErrors);\n            }\n        }\n        if (Width(currentmonitorinfo) == Width(SelectedMonitor) && Height(currentmonitorinfo) == Height(SelectedMonitor)) {\n            DeviceContext->CopyResource(StagingSurf.Get(), aquireddesktopimage.Get());\n        }\n        else {\n            D3D11_BOX sourceRegion;\n            sourceRegion.left = OffsetX(SelectedMonitor) - OutputDesc.DesktopCoordinates.left;\n            sourceRegion.right = sourceRegion.left + Width(SelectedMonitor);\n            sourceRegion.top = OffsetY(SelectedMonitor) - OutputDesc.DesktopCoordinates.top;\n            sourceRegion.bottom = sourceRegion.top + Height(SelectedMonitor);\n            sourceRegion.front = 0;\n            
sourceRegion.back = 1;\n            DeviceContext->CopySubresourceRegion(StagingSurf.Get(), 0, 0, 0, 0, aquireddesktopimage.Get(), 0, &sourceRegion);\n        }\n\n        D3D11_MAPPED_SUBRESOURCE MappingDesc = {0};\n        MAPPED_SUBRESOURCERAII mappedresource(DeviceContext.Get());\n        hr = mappedresource.Map(StagingSurf.Get(), 0, D3D11_MAP_READ, 0, &MappingDesc);\n        // Get the data\n        if (MappingDesc.pData == NULL) {\n            return ProcessFailure(Device.Get(),\n                                  L\"Failed to map staging texture in DUPLICATIONMANAGER: the mapped subresource returned NULL\", L\"Error\",\n                                  hr, SystemTransitionsExpectedErrors);\n        }\n\n        ImageRect ret;\n        ret.left = 0;\n        ret.top = 0;\n        ret.bottom = Height(SelectedMonitor);\n        ret.right = Width(SelectedMonitor);\n        auto startsrc = reinterpret_cast<unsigned char *>(MappingDesc.pData);\n\n        auto rowstride = PixelStride * Width(SelectedMonitor);\n\n        if (Data->ScreenCaptureData.OnNewFrame && !Data->ScreenCaptureData.OnFrameChanged) {\n            auto wholeimg = Create(ret, PixelStride, static_cast<int>(MappingDesc.RowPitch) - rowstride, startsrc);\n            Data->ScreenCaptureData.OnNewFrame(wholeimg, SelectedMonitor);\n        }\n        else {\n            auto startdst = NewImageBuffer.get();\n            if (rowstride == static_cast<int>(MappingDesc.RowPitch)) { // no need for multiple calls, there is no padding here\n                memcpy(startdst, startsrc, rowstride * Height(SelectedMonitor));\n            }\n            else {\n                for (auto i = 0; i < Height(SelectedMonitor); i++) {\n                    memcpy(startdst + (i * rowstride), startsrc + (i * MappingDesc.RowPitch), rowstride);\n                }\n            }\n            ProcessCapture(Data->ScreenCaptureData, *this, SelectedMonitor, ret);\n        }\n        return Ret;\n    }\n} // namespace 
Screen_Capture\n} // namespace SL\n\n#endif\n"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/GDIFrameProcessor.cpp",
    "content": "#include \"GDIFrameProcessor.h\"\n#include <Dwmapi.h>\n\nnamespace SL {\nnamespace Screen_Capture {\n\n    DUPL_RETURN GDIFrameProcessor::Init(std::shared_ptr<Thread_Data> data, const Monitor &monitor)\n    {\n        SelectedMonitor = monitor;\n        auto Ret = DUPL_RETURN_SUCCESS;\n\n        MonitorDC.DC = CreateDCA(Name(SelectedMonitor), NULL, NULL, NULL);\n        CaptureDC.DC = CreateCompatibleDC(MonitorDC.DC);\n        CaptureBMP.Bitmap = CreateCompatibleBitmap(MonitorDC.DC, Width(SelectedMonitor), Height(SelectedMonitor));\n\n        if (!MonitorDC.DC || !CaptureDC.DC || !CaptureBMP.Bitmap) {\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n        }\n\n        Data = data;\n        return Ret;\n    }\n    DUPL_RETURN GDIFrameProcessor::Init(std::shared_ptr<Thread_Data> data, const Window &selectedwindow)\n    {\n        // this is needed to fix AERO BitBlt capturing issues\n        ANIMATIONINFO str;\n        str.cbSize = sizeof(str);\n        str.iMinAnimate = 0;\n        SystemParametersInfo(SPI_SETANIMATION, sizeof(str), (void *)&str, SPIF_UPDATEINIFILE | SPIF_SENDCHANGE);\n\n        SelectedWindow = reinterpret_cast<HWND>(selectedwindow.Handle);\n        auto Ret = DUPL_RETURN_SUCCESS;\n\n        MonitorDC.DC = GetWindowDC(SelectedWindow);\n        CaptureDC.DC = CreateCompatibleDC(MonitorDC.DC);\n\n        CaptureBMP.Bitmap = CreateCompatibleBitmap(MonitorDC.DC, selectedwindow.Size.x, selectedwindow.Size.y);\n\n        if (!MonitorDC.DC || !CaptureDC.DC || !CaptureBMP.Bitmap) {\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n        }\n\n        Data = data;\n        return Ret;\n    }\n    DUPL_RETURN GDIFrameProcessor::ProcessFrame(const Monitor &currentmonitorinfo)\n    {\n\n        auto Ret = DUPL_RETURN_SUCCESS;\n\n        ImageRect ret;\n        ret.left = ret.top = 0;\n        ret.bottom = Height(SelectedMonitor);\n        ret.right = Width(SelectedMonitor);\n\n        // Selecting an object into the 
specified DC\n        auto originalBmp = SelectObject(CaptureDC.DC, CaptureBMP.Bitmap);\n\n        if (BitBlt(CaptureDC.DC, 0, 0, ret.right, ret.bottom, MonitorDC.DC, 0, 0, SRCCOPY | CAPTUREBLT) == FALSE) {\n            // if the screen cannot be captured, return\n            SelectObject(CaptureDC.DC, originalBmp);\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED; // likely a permission issue\n        }\n        else {\n\n            BITMAPINFOHEADER bi;\n            memset(&bi, 0, sizeof(bi));\n\n            bi.biSize = sizeof(BITMAPINFOHEADER);\n\n            bi.biWidth = ret.right;\n            bi.biHeight = -ret.bottom;\n            bi.biPlanes = 1;\n            bi.biBitCount = PixelStride * 8; // always 32 bits damnit!!!\n            bi.biCompression = BI_RGB;\n            bi.biSizeImage = ((ret.right * bi.biBitCount + 31) / (PixelStride * 8)) * PixelStride * ret.bottom;\n            GetDIBits(MonitorDC.DC, CaptureBMP.Bitmap, 0, (UINT)ret.bottom, NewImageBuffer.get(), (BITMAPINFO *)&bi, DIB_RGB_COLORS);\n            SelectObject(CaptureDC.DC, originalBmp);\n            ProcessCapture(Data->ScreenCaptureData, *this, currentmonitorinfo, ret);\n        }\n\n        return Ret;\n    }\n\n    DUPL_RETURN GDIFrameProcessor::ProcessFrame(Window &selectedwindow)\n    {\n        auto Ret = DUPL_RETURN_SUCCESS;\n        auto windowrect = SL::Screen_Capture::GetWindowRect(SelectedWindow);\n        ImageRect ret;\n        memset(&ret, 0, sizeof(ret));\n        ret.bottom = windowrect.ClientRect.bottom;\n        ret.left = windowrect.ClientRect.left;\n        ret.right = windowrect.ClientRect.right;\n        ret.top = windowrect.ClientRect.top;\n        selectedwindow.Position.x = windowrect.ClientRect.left;\n        selectedwindow.Position.y = windowrect.ClientRect.top;\n\n        if (!IsWindow(SelectedWindow) || selectedwindow.Size.x != Width(ret) || selectedwindow.Size.y != Height(ret)) {\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED; // window 
size changed. This will rebuild everything\n        }\n\n        // Selecting an object into the specified DC\n        auto originalBmp = SelectObject(CaptureDC.DC, CaptureBMP.Bitmap);\n        auto left = -windowrect.ClientBorder.left;\n        auto top = -windowrect.ClientBorder.top;\n\n        if (BitBlt(CaptureDC.DC, left, top, ret.right, ret.bottom, MonitorDC.DC, 0, 0, SRCCOPY) == FALSE) {\n            // if the screen cannot be captured, return\n            SelectObject(CaptureDC.DC, originalBmp);\n            return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED; // likely a permission issue\n        }\n        else {\n\n            BITMAPINFOHEADER bi;\n            memset(&bi, 0, sizeof(bi));\n\n            bi.biSize = sizeof(BITMAPINFOHEADER);\n\n            bi.biWidth = Width(ret);\n            bi.biHeight = -Height(ret);\n            bi.biPlanes = 1;\n            bi.biBitCount = PixelStride * 8; // always 32 bits damnit!!!\n            bi.biCompression = BI_RGB;\n            bi.biSizeImage = ((Width(ret) * bi.biBitCount + 31) / (PixelStride * 8)) * PixelStride * Height(ret);\n            GetDIBits(MonitorDC.DC, CaptureBMP.Bitmap, 0, (UINT)Height(ret), NewImageBuffer.get(), (BITMAPINFO *)&bi, DIB_RGB_COLORS);\n            SelectObject(CaptureDC.DC, originalBmp);\n            ProcessCapture(Data->WindowCaptureData, *this, selectedwindow, ret);\n        }\n\n        return Ret;\n    }\n} // namespace Screen_Capture\n} // namespace SL"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/GDIMouseProcessor.cpp",
    "content": "#include \"GDIMouseProcessor.h\"\n#include \"GDIHelpers.h\"\n\nnamespace SL {\n    namespace Screen_Capture {\n\n        DUPL_RETURN GDIMouseProcessor::Init(std::shared_ptr<Thread_Data> data) {\n            auto Ret = DUPL_RETURN_SUCCESS;\n            MonitorDC.DC = GetDC(NULL);\n            CaptureDC.DC = CreateCompatibleDC(MonitorDC.DC);\n\n            if (!MonitorDC.DC || !CaptureDC.DC) {\n                return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n            }\n            Data = data;\n            return Ret;\n        }\n\n        DUPL_RETURN GDIMouseProcessor::ProcessFrame()\n        {\n            auto Ret = DUPL_RETURN_SUCCESS;\n            CURSORINFO cursorInfo;\n            memset(&cursorInfo, 0, sizeof(cursorInfo));\n            cursorInfo.cbSize = sizeof(cursorInfo);\n\n            if (GetCursorInfo(&cursorInfo) == FALSE) {\n                return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n            }\n            if (!(cursorInfo.flags & CURSOR_SHOWING)) {\n                return DUPL_RETURN_SUCCESS;// the mouse cursor is hidden no need to do anything.\n            }\n            ICONINFOEXA ii;\n            memset(&ii, 0, sizeof(ii));\n            ii.cbSize = sizeof(ii);\n            if (GetIconInfoExA(cursorInfo.hCursor, &ii) == FALSE) {\n                return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n            }\n            HBITMAPWrapper colorbmp, maskbmp;\n            colorbmp.Bitmap = ii.hbmColor;\n            maskbmp.Bitmap = ii.hbmMask;\n            HBITMAPWrapper bitmap;\n            bitmap.Bitmap = CreateCompatibleBitmap(MonitorDC.DC, MaxCursurorSize, MaxCursurorSize);\n\n            auto originalBmp = SelectObject(CaptureDC.DC, bitmap.Bitmap);\n            if (DrawIcon(CaptureDC.DC, 0, 0, cursorInfo.hCursor) == FALSE) {\n                return DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED;\n            }\n\n            ImageRect ret;\n            ret.left = ret.top = 0;\n            ret.bottom = MaxCursurorSize;\n        
    ret.right = MaxCursurorSize;\n\n            BITMAPINFOHEADER bi;\n            memset(&bi, 0, sizeof(bi));\n\n            bi.biSize = sizeof(BITMAPINFOHEADER);\n            bi.biWidth = ret.right;\n            bi.biHeight = -ret.bottom;\n            bi.biPlanes = 1;\n            bi.biBitCount = PixelStride * 8; //always 32 bits damnit!!!\n            bi.biCompression = BI_RGB;\n            bi.biSizeImage = ((ret.right * bi.biBitCount + 31) / (PixelStride * 8)) * PixelStride* ret.bottom;\n\n            GetDIBits(MonitorDC.DC, bitmap.Bitmap, 0, (UINT)ret.bottom, NewImageBuffer.get(), (BITMAPINFO*)&bi, DIB_RGB_COLORS);\n\n            SelectObject(CaptureDC.DC, originalBmp);\n\n            auto wholeimg = Create(ret, PixelStride, 0, NewImageBuffer.get());\n\n            //need to make sure the alpha channel is correct\n            if (ii.wResID == 32513) { // when its just the i beam\n                auto ptr = (unsigned int*)NewImageBuffer.get();\n                for (auto i = 0; i < RowStride(wholeimg) *Height(wholeimg) / 4; i++) {\n                    if (ptr[i] != 0) {\n                        ptr[i] = 0xff000000;\n                    }\n                }\n            }\n            //else if (ii.hbmMask != nullptr && ii.hbmColor == nullptr) {// just \n            //\tauto ptr = (unsigned int*)NewImageBuffer.get();\n            //\tfor (auto i = 0; i < RowStride(*wholeimg) *Height(*wholeimg) / 4; i++) {\n            //\t\tif (ptr[i] != 0) {\n            //\t\t\tptr[i] = ptr[i] | 0xffffffff;\n            //\t\t}\n            //\t}\n            //}\n            int lastx = static_cast<int>(cursorInfo.ptScreenPos.x - ii.xHotspot);\n            int lasty = static_cast<int>(cursorInfo.ptScreenPos.y - ii.yHotspot);\n\n            if (Data->ScreenCaptureData.OnMouseChanged || Data->WindowCaptureData.OnMouseChanged) {\n                //if the mouse image is different, send the new image and swap the data \n                if (memcmp(NewImageBuffer.get(), 
OldImageBuffer.get(), bi.biSizeImage) != 0) {\n                    if (Data->WindowCaptureData.OnMouseChanged) {\n                        Data->WindowCaptureData.OnMouseChanged(&wholeimg, Point{ lastx, lasty });\n                    }\n                    if (Data->ScreenCaptureData.OnMouseChanged) {\n                        Data->ScreenCaptureData.OnMouseChanged(&wholeimg, Point{ lastx, lasty });\n                    }\n                    std::swap(NewImageBuffer, OldImageBuffer);\n                }\n                else if (Last_x != lastx || Last_y != lasty) {\n                    if (Data->WindowCaptureData.OnMouseChanged) {\n                        Data->WindowCaptureData.OnMouseChanged(nullptr, Point{ lastx, lasty });\n                    }\n                    if (Data->ScreenCaptureData.OnMouseChanged) {\n                        Data->ScreenCaptureData.OnMouseChanged(nullptr, Point{ lastx, lasty });\n                    }\n                }\n            }\n\n            Last_x = lastx;\n            Last_y = lasty;\n            return Ret;\n        }\n\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/GDIWindowProcessor.cpp",
    "content": "#include \"GDIWindowProcessor.h\"\n\nnamespace SL {\n    namespace Screen_Capture {\n\n\n\n\n\n    }\n}"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/GetMonitors.cpp",
    "content": "#include \"SCCommon.h\"\n#include \"ScreenCapture.h\"\n#define NOMINMAX\n#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers\n#include <Windows.h>\n\nnamespace SL {\nnamespace Screen_Capture {\n\n    std::vector<Monitor> GetMonitors()\n    {\n        std::vector<Monitor> ret;\n        DISPLAY_DEVICEA dd;\n        ZeroMemory(&dd, sizeof(dd));\n        dd.cb = sizeof(dd);\n        for (auto i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); i++) {\n            // monitor must be attached to desktop and not a mirroring device\n            if ((dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) & !(dd.StateFlags & DISPLAY_DEVICE_MIRRORING_DRIVER)) {\n                DEVMODEA devMode;\n                devMode.dmSize = sizeof(devMode);\n                EnumDisplaySettingsA(dd.DeviceName, ENUM_CURRENT_SETTINGS, &devMode);\n                std::string name = dd.DeviceName;\n                auto mon = CreateDCA(dd.DeviceName, NULL, NULL, NULL);\n                auto xdpi = GetDeviceCaps(mon, LOGPIXELSX);\n                DeleteDC(mon);\n                auto scale = 1.0f;\n                switch (xdpi) {\n                case 96:\n                    scale = 1.0f;\n                    break;\n                case 120:\n                    scale = 1.25f;\n                    break;\n                case 144:\n                    scale = 1.5f;\n                    break;\n                case 192:\n                    scale = 2.0f;\n                    break;\n                default:\n                    scale = 1.0f;\n                    break;\n                }\n                ret.push_back(\n                    CreateMonitor(i, i, devMode.dmPelsHeight, devMode.dmPelsWidth, devMode.dmPosition.x, devMode.dmPosition.y, name, scale));\n            }\n        }\n        return ret;\n    }\n} // namespace Screen_Capture\n} // namespace SL"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/GetWindows.cpp",
    "content": "#include \"GDIHelpers.h\"\n#include \"SCCommon.h\"\n#include \"ScreenCapture.h\"\n\n#define NOMINMAX\n#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers\n#include <Windows.h>\n\nnamespace SL {\nnamespace Screen_Capture {\n\n    struct srch {\n        std::vector<Window> Found;\n    };\n    BOOL CALLBACK EnumWindowsProc(_In_ HWND hwnd, _In_ LPARAM lParam)\n    {\n        Window w;\n        char buffer[sizeof(w.Name)];\n        GetWindowTextA(hwnd, buffer, sizeof(buffer));\n        srch *s = (srch *)lParam;\n        std::string name = buffer;\n        w.Handle = reinterpret_cast<size_t>(hwnd);\n        auto windowrect = SL::Screen_Capture::GetWindowRect(hwnd);\n        w.Position.x = windowrect.ClientRect.left;\n        w.Position.y = windowrect.ClientRect.top;\n        w.Size.x = windowrect.ClientRect.right - windowrect.ClientRect.left;\n        w.Size.y = windowrect.ClientRect.bottom - windowrect.ClientRect.top;\n        memcpy(w.Name, name.c_str(), name.size() + 1);\n        s->Found.push_back(w);\n        return TRUE;\n    }\n\n    std::vector<Window> GetWindows()\n    {\n        srch s;\n        EnumWindows(EnumWindowsProc, (LPARAM)&s);\n        return s.Found;\n    }\n\n} // namespace Screen_Capture\n} // namespace SL"
  },
  {
    "path": "ThirdParty/screen_capture_lite - Copy/src/windows/ThreadRunner.cpp",
    "content": "#include \"ScreenCapture.h\"\n#include \"DXFrameProcessor.h\"\n#include \"GDIFrameProcessor.h\"\n#include \"GDIMouseProcessor.h\"\n#include \"GDIWindowProcessor.h\"\n#include \"ThreadManager.h\"\n\n#define NOMINMAX\n#define WIN32_LEAN_AND_MEAN\n#include <windows.h>\n\n#include <memory>\n#include <string>\n#include <iostream>\n\nnamespace SL {\n    namespace Screen_Capture {\n\n\n        template<class T>void ProcessExit(DUPL_RETURN Ret, T* TData) {\n            if (Ret != DUPL_RETURN_SUCCESS)\n            {\n                if (Ret == DUPL_RETURN_ERROR_EXPECTED)\n                {\n                    // The system is in a transition state so request the duplication be restarted\n                    TData->CommonData_.ExpectedErrorEvent = true;\n                }\n                else\n                {\n                    // Unexpected error so exit the application\n                    TData->CommonData_.UnexpectedErrorEvent = true;\n                }\n            }\n        }\n        template<class T>bool SwitchToInputDesktop(const std::shared_ptr<T> data) {\n            HDESK CurrentDesktop = nullptr;\n            CurrentDesktop = OpenInputDesktop(0, FALSE, GENERIC_ALL);\n            if (!CurrentDesktop)\n            {\n                // We do not have access to the desktop so request a retry\n                data->CommonData_.ExpectedErrorEvent = true;\n                ProcessExit(DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED, data.get());\n                return false;\n            }\n\n            // Attach desktop to this thread\n            bool DesktopAttached = SetThreadDesktop(CurrentDesktop) != 0;\n            CloseDesktop(CurrentDesktop);\n            CurrentDesktop = nullptr;\n            if (!DesktopAttached)\n            {\n                // We do not have access to the desktop so request a retry\n                data->CommonData_.ExpectedErrorEvent = true;\n                ProcessExit(DUPL_RETURN::DUPL_RETURN_ERROR_EXPECTED, 
data.get());\n                return false;\n            }\n            return true;\n        }\n        void RunCaptureMouse(std::shared_ptr<Thread_Data> data) {\n            if (!SwitchToInputDesktop(data)) return;\n            TryCaptureMouse<GDIMouseProcessor>(data);\n        } \n        void RunCaptureMonitor(std::shared_ptr<Thread_Data> data, Monitor monitor) {\n            //need to switch to the input desktop for capturing...\n            if (!SwitchToInputDesktop(data)) return;\n#if defined _DEBUG || !defined NDEBUG\n            std::cout << \"Starting to Capture on Monitor \" << Name(monitor) << std::endl;\n            std::cout << \"Trying DirectX Desktop Duplication \" << std::endl;\n#endif\n            if (!TryCaptureMonitor<DXFrameProcessor>(data, monitor)) {//if DX is not supported, fall back to GDI capture\n#if defined _DEBUG || !defined NDEBUG\n                std::cout << \"DirectX Desktop Duplication not supported, falling back to GDI Capturing . . .\" << std::endl;\n#endif\n                TryCaptureMonitor<GDIFrameProcessor>(data, monitor);\n            }\n        }\n\n        void RunCaptureWindow(std::shared_ptr<Thread_Data> data, Window wnd) {\n            //need to switch to the input desktop for capturing...\n            if (!SwitchToInputDesktop(data)) return;\n            TryCaptureWindow<GDIFrameProcessor>(data, wnd);\n        }\n    }\n}\n\n\n\n"
  },
  {
    "path": "_config.yml",
    "content": "theme: jekyll-theme-minimal"
  }
]